00:00:00.001 Started by upstream project "autotest-per-patch" build number 132528 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:02.251 The recommended git tool is: git 00:00:02.252 using credential 00000000-0000-0000-0000-000000000002 00:00:02.254 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.267 Fetching changes from the remote Git repository 00:00:02.270 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.282 Using shallow fetch with depth 1 00:00:02.282 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.282 > git --version # timeout=10 00:00:02.297 > git --version # 'git version 2.39.2' 00:00:02.297 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.313 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.313 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.164 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.180 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.193 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.193 > git config core.sparsecheckout # timeout=10 00:00:09.206 > git read-tree -mu HEAD # timeout=10 00:00:09.224 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.247 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.247 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.332 [Pipeline] Start of Pipeline 00:00:09.352 [Pipeline] library 00:00:09.355 Loading library shm_lib@master 00:00:09.355 Library shm_lib@master is cached. Copying from home. 00:00:09.370 [Pipeline] node 00:00:09.398 Running on WFP66 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:09.399 [Pipeline] { 00:00:09.411 [Pipeline] catchError 00:00:09.413 [Pipeline] { 00:00:09.423 [Pipeline] wrap 00:00:09.430 [Pipeline] { 00:00:09.436 [Pipeline] stage 00:00:09.438 [Pipeline] { (Prologue) 00:00:09.641 [Pipeline] sh 00:00:09.929 + logger -p user.info -t JENKINS-CI 00:00:09.958 [Pipeline] echo 00:00:09.960 Node: WFP66 00:00:09.970 [Pipeline] sh 00:00:10.288 [Pipeline] setCustomBuildProperty 00:00:10.304 [Pipeline] echo 00:00:10.306 Cleanup processes 00:00:10.312 [Pipeline] sh 00:00:10.599 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:10.599 3160737 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:10.616 [Pipeline] sh 00:00:10.910 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:10.910 ++ grep -v 'sudo pgrep' 00:00:10.910 ++ awk '{print $1}' 00:00:10.910 + sudo kill -9 00:00:10.910 + true 00:00:10.926 [Pipeline] cleanWs 00:00:10.937 [WS-CLEANUP] Deleting project workspace... 00:00:10.937 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.944 [WS-CLEANUP] done 00:00:10.949 [Pipeline] setCustomBuildProperty 00:00:10.964 [Pipeline] sh 00:00:11.250 + sudo git config --global --replace-all safe.directory '*' 00:00:11.402 [Pipeline] httpRequest 00:00:11.807 [Pipeline] echo 00:00:11.809 Sorcerer 10.211.164.101 is alive 00:00:11.819 [Pipeline] retry 00:00:11.821 [Pipeline] { 00:00:11.835 [Pipeline] httpRequest 00:00:11.840 HttpMethod: GET 00:00:11.840 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.840 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.847 Response Code: HTTP/1.1 200 OK 00:00:11.847 Success: Status code 200 is in the accepted range: 200,404 00:00:11.848 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.918 [Pipeline] } 00:00:35.934 [Pipeline] // retry 00:00:35.940 [Pipeline] sh 00:00:36.220 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:36.234 [Pipeline] httpRequest 00:00:36.690 [Pipeline] echo 00:00:36.692 Sorcerer 10.211.164.101 is alive 00:00:36.703 [Pipeline] retry 00:00:36.705 [Pipeline] { 00:00:36.720 [Pipeline] httpRequest 00:00:36.724 HttpMethod: GET 00:00:36.725 URL: http://10.211.164.101/packages/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:00:36.726 Sending request to url: http://10.211.164.101/packages/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:00:36.731 Response Code: HTTP/1.1 200 OK 00:00:36.731 Success: Status code 200 is in the accepted range: 200,404 00:00:36.732 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:06:15.017 [Pipeline] } 00:06:15.036 [Pipeline] // retry 00:06:15.044 [Pipeline] sh 00:06:15.416 + tar --no-same-owner -xf spdk_f7ce15267707aa0a59fa142564fc34607599b496.tar.gz 00:06:18.749 [Pipeline] sh 00:06:19.035 + git -C spdk log --oneline -n5 00:06:19.035 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:06:19.035 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:06:19.035 e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled 00:06:19.035 27c6508ea bdev: Add spdk_bdev_io_hide_metadata() for bdev modules 00:06:19.035 c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:06:19.046 [Pipeline] } 00:06:19.060 [Pipeline] // stage 00:06:19.070 [Pipeline] stage 00:06:19.073 [Pipeline] { (Prepare) 00:06:19.090 [Pipeline] writeFile 00:06:19.104 [Pipeline] sh 00:06:19.388 + logger -p user.info -t JENKINS-CI 00:06:19.401 [Pipeline] sh 00:06:19.684 + logger -p user.info -t JENKINS-CI 00:06:19.696 [Pipeline] sh 00:06:19.979 + cat autorun-spdk.conf 00:06:19.979 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:19.979 SPDK_TEST_FUZZER_SHORT=1 00:06:19.979 SPDK_TEST_FUZZER=1 00:06:19.979 SPDK_TEST_SETUP=1 00:06:19.979 SPDK_RUN_UBSAN=1 00:06:19.986 RUN_NIGHTLY=0 00:06:19.992 [Pipeline] readFile 00:06:20.025 [Pipeline] withEnv 00:06:20.028 [Pipeline] { 00:06:20.043 [Pipeline] sh 00:06:20.331 + set -ex 00:06:20.331 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:06:20.331 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:06:20.331 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:20.331 ++ SPDK_TEST_FUZZER_SHORT=1 00:06:20.331 ++ SPDK_TEST_FUZZER=1 00:06:20.331 ++ SPDK_TEST_SETUP=1 00:06:20.331 ++ SPDK_RUN_UBSAN=1 00:06:20.331 ++ RUN_NIGHTLY=0 
00:06:20.331 + case $SPDK_TEST_NVMF_NICS in 00:06:20.331 + DRIVERS= 00:06:20.331 + [[ -n '' ]] 00:06:20.331 + exit 0 00:06:20.341 [Pipeline] } 00:06:20.356 [Pipeline] // withEnv 00:06:20.362 [Pipeline] } 00:06:20.377 [Pipeline] // stage 00:06:20.388 [Pipeline] catchError 00:06:20.390 [Pipeline] { 00:06:20.403 [Pipeline] timeout 00:06:20.403 Timeout set to expire in 30 min 00:06:20.405 [Pipeline] { 00:06:20.420 [Pipeline] stage 00:06:20.423 [Pipeline] { (Tests) 00:06:20.440 [Pipeline] sh 00:06:20.725 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:06:20.725 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:06:20.725 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:06:20.725 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:06:20.725 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:20.725 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:06:20.725 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:06:20.725 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:06:20.725 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:06:20.725 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:06:20.725 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:06:20.725 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:06:20.725 + source /etc/os-release 00:06:20.725 ++ NAME='Fedora Linux' 00:06:20.725 ++ VERSION='39 (Cloud Edition)' 00:06:20.725 ++ ID=fedora 00:06:20.725 ++ VERSION_ID=39 00:06:20.725 ++ VERSION_CODENAME= 00:06:20.725 ++ PLATFORM_ID=platform:f39 00:06:20.725 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:20.725 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:20.725 ++ LOGO=fedora-logo-icon 00:06:20.725 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:20.725 ++ HOME_URL=https://fedoraproject.org/ 00:06:20.725 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:20.725 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:20.725 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:20.725 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:20.725 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:20.725 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:20.725 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:20.725 ++ SUPPORT_END=2024-11-12 00:06:20.725 ++ VARIANT='Cloud Edition' 00:06:20.725 ++ VARIANT_ID=cloud 00:06:20.725 + uname -a 00:06:20.725 Linux spdk-wfp-66 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:20.725 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:22.628 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:06:22.886 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:06:22.886 Hugepages 00:06:22.886 node hugesize free / total 00:06:22.886 node0 1048576kB 0 / 0 00:06:22.886 node0 2048kB 0 / 0 00:06:22.886 node1 1048576kB 0 / 0 00:06:22.886 node1 2048kB 0 / 0 00:06:22.886 00:06:22.886 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:22.886 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:22.886 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:22.886 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:22.886 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:22.886 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:22.886 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:22.886 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 
00:06:22.886 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:22.886 VMD 0000:5d:05.5 8086 201d 0 vfio-pci - - 00:06:22.886 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:22.886 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:22.886 VMD 0000:ae:05.5 8086 201d 1 vfio-pci - - 00:06:22.886 NVMe 0000:d9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:22.886 + rm -f /tmp/spdk-ld-path 00:06:22.886 + source autorun-spdk.conf 00:06:22.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:22.886 ++ SPDK_TEST_FUZZER_SHORT=1 00:06:22.886 ++ SPDK_TEST_FUZZER=1 00:06:22.887 ++ SPDK_TEST_SETUP=1 00:06:22.887 ++ SPDK_RUN_UBSAN=1 00:06:22.887 ++ RUN_NIGHTLY=0 00:06:22.887 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:22.887 + [[ -n '' ]] 00:06:22.887 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:22.887 + for M in /var/spdk/build-*-manifest.txt 00:06:22.887 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:22.887 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:06:22.887 + for M in /var/spdk/build-*-manifest.txt 00:06:22.887 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:22.887 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:06:22.887 + for M in /var/spdk/build-*-manifest.txt 00:06:22.887 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:22.887 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:06:22.887 ++ uname 00:06:22.887 + [[ Linux == \L\i\n\u\x ]] 00:06:22.887 + sudo dmesg -T 00:06:23.145 + sudo dmesg --clear 00:06:23.145 + dmesg_pid=3162745 00:06:23.145 + [[ Fedora Linux == FreeBSD ]] 00:06:23.145 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:23.145 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:23.145 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:23.145 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:23.145 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:23.145 + [[ -x /usr/src/fio-static/fio ]] 00:06:23.145 + sudo dmesg -Tw 00:06:23.145 + export FIO_BIN=/usr/src/fio-static/fio 00:06:23.145 + FIO_BIN=/usr/src/fio-static/fio 00:06:23.145 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:23.145 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:06:23.145 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:23.145 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:23.145 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:23.145 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:23.145 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:23.145 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:23.145 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:06:23.145 18:03:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:23.145 18:03:00 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:06:23.145 18:03:00 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:23.145 18:03:00 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:06:23.145 18:03:00 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:06:23.145 18:03:00 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:06:23.145 18:03:00 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:06:23.145 18:03:00 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:06:23.145 18:03:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:23.145 18:03:00 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:06:23.145 18:03:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:23.145 18:03:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:23.145 18:03:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:23.145 18:03:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:23.145 18:03:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.145 18:03:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.145 18:03:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.145 18:03:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.145 18:03:00 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.145 18:03:00 -- paths/export.sh@5 -- $ export PATH 00:06:23.145 18:03:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.145 18:03:00 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:06:23.145 18:03:00 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:23.145 18:03:00 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732640580.XXXXXX 00:06:23.145 18:03:00 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732640580.3ZIlJD 00:06:23.145 18:03:00 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:23.145 18:03:00 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:23.145 18:03:00 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:06:23.145 18:03:00 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:23.145 18:03:00 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:23.145 18:03:00 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:23.145 18:03:00 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:23.145 18:03:00 -- common/autotest_common.sh@10 -- $ set +x 00:06:23.145 18:03:00 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:23.145 18:03:00 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:23.145 18:03:00 -- pm/common@17 -- $ local monitor 00:06:23.145 18:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:23.145 18:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:23.145 18:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:23.145 18:03:00 -- pm/common@21 -- $ date +%s 00:06:23.145 18:03:00 -- pm/common@21 -- $ date +%s 00:06:23.145 18:03:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:23.145 18:03:00 -- pm/common@21 -- $ date +%s 00:06:23.145 18:03:00 -- pm/common@25 -- $ sleep 1 00:06:23.145 18:03:00 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640580 00:06:23.145 18:03:00 -- pm/common@21 -- $ date +%s 00:06:23.145 18:03:00 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640580 00:06:23.145 18:03:00 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640580 00:06:23.145 18:03:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640580 00:06:23.403 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640580_collect-cpu-temp.pm.log 00:06:23.403 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640580_collect-vmstat.pm.log 00:06:23.403 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640580_collect-cpu-load.pm.log 00:06:23.403 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640580_collect-bmc-pm.bmc.pm.log 00:06:24.339 18:03:01 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:24.339 18:03:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:24.339 18:03:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:24.339 18:03:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:24.340 18:03:01 -- spdk/autobuild.sh@16 -- $ date -u 00:06:24.340 Tue Nov 26 05:03:01 PM UTC 2024 00:06:24.340 18:03:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:24.340 v25.01-pre-268-gf7ce15267 00:06:24.340 18:03:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:24.340 18:03:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:24.340 18:03:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:24.340 18:03:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:24.340 18:03:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:24.340 18:03:01 -- common/autotest_common.sh@10 -- $ set +x 00:06:24.340 ************************************ 00:06:24.340 START TEST ubsan 00:06:24.340 ************************************ 00:06:24.340 18:03:01 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:24.340 using ubsan 00:06:24.340 00:06:24.340 real 0m0.000s 00:06:24.340 user 0m0.000s 00:06:24.340 sys 0m0.000s 00:06:24.340 18:03:01 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:24.340 18:03:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:24.340 ************************************ 00:06:24.340 END TEST ubsan 00:06:24.340 ************************************ 00:06:24.340 18:03:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:24.340 18:03:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:24.340 18:03:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:24.340 18:03:01 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:06:24.340 18:03:01 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:06:24.340 18:03:01 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:06:24.340 
18:03:01 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:06:24.340 18:03:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:24.340 18:03:01 -- common/autotest_common.sh@10 -- $ set +x 00:06:24.340 ************************************ 00:06:24.340 START TEST autobuild_llvm_precompile 00:06:24.340 ************************************ 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:06:24.340 Target: x86_64-redhat-linux-gnu 00:06:24.340 Thread model: posix 00:06:24.340 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:06:24.340 18:03:01 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:24.598 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:24.598 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:25.165 Using 'verbs' RDMA provider 00:06:38.307 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:53.209 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:53.209 Creating mk/config.mk...done. 00:06:53.209 Creating mk/cc.flags.mk...done. 00:06:53.209 Type 'make' to build. 
00:06:53.209 
00:06:53.209 real 0m27.174s
00:06:53.209 user 0m13.706s
00:06:53.209 sys 0m12.455s
00:06:53.209 18:03:28 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:53.209 18:03:28 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:06:53.209 ************************************
00:06:53.209 END TEST autobuild_llvm_precompile
00:06:53.209 ************************************
00:06:53.209 18:03:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:53.209 18:03:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:53.209 18:03:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:06:53.209 18:03:28 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:06:53.209 18:03:28 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:06:53.209 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:06:53.209 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:06:53.209 Using 'verbs' RDMA provider
00:07:05.420 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:07:15.393 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:07:15.393 Creating mk/config.mk...done.
00:07:15.393 Creating mk/cc.flags.mk...done.
00:07:15.393 Type 'make' to build.
00:07:15.393 18:03:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:07:15.393 18:03:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:15.393 18:03:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:15.393 18:03:52 -- common/autotest_common.sh@10 -- $ set +x
00:07:15.393 ************************************
00:07:15.393 START TEST make
00:07:15.393 ************************************
00:07:15.393 18:03:52 make -- common/autotest_common.sh@1129 -- $ make -j112
00:07:15.651 make[1]: Nothing to be done for 'all'.
00:07:17.033 The Meson build system
00:07:17.033 Version: 1.5.0
00:07:17.033 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:07:17.033 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:07:17.033 Build type: native build
00:07:17.033 Project name: libvfio-user
00:07:17.033 Project version: 0.0.1
00:07:17.033 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:07:17.034 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:07:17.034 Host machine cpu family: x86_64
00:07:17.034 Host machine cpu: x86_64
00:07:17.034 Run-time dependency threads found: YES
00:07:17.034 Library dl found: YES
00:07:17.034 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:17.034 Run-time dependency json-c found: YES 0.17
00:07:17.034 Run-time dependency cmocka found: YES 1.1.7
00:07:17.034 Program pytest-3 found: NO
00:07:17.034 Program flake8 found: NO
00:07:17.034 Program misspell-fixer found: NO
00:07:17.034 Program restructuredtext-lint found: NO
00:07:17.034 Program valgrind found: YES (/usr/bin/valgrind)
00:07:17.034 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:17.034 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:17.034 Compiler for C supports arguments -Wwrite-strings: YES
00:07:17.034 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:07:17.034 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:07:17.034 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:07:17.034 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:07:17.034 Build targets in project: 8
00:07:17.034 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:07:17.034 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:07:17.034 
00:07:17.034 libvfio-user 0.0.1
00:07:17.034 
00:07:17.034 User defined options
00:07:17.034 buildtype : debug
00:07:17.034 default_library: static
00:07:17.034 libdir : /usr/local/lib
00:07:17.034 
00:07:17.034 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:17.601 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:07:17.601 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:07:17.601 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:07:17.601 [3/36] Compiling C object samples/null.p/null.c.o
00:07:17.601 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:07:17.601 [5/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:07:17.601 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:07:17.601 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:07:17.601 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:07:17.601 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:07:17.601 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:07:17.601 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:07:17.601 [12/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:07:17.601 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:07:17.601 [14/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:07:17.601 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:07:17.601 [16/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:07:17.601 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:07:17.601 [18/36] Compiling C object test/unit_tests.p/mocks.c.o
00:07:17.601 [19/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:07:17.601 [20/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:07:17.601 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:07:17.601 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:07:17.601 [23/36] Compiling C object samples/server.p/server.c.o
00:07:17.601 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:07:17.601 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:07:17.601 [26/36] Compiling C object samples/client.p/client.c.o
00:07:17.859 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:07:17.859 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:07:17.859 [29/36] Linking static target lib/libvfio-user.a
00:07:17.859 [30/36] Linking target samples/client
00:07:17.859 [31/36] Linking target test/unit_tests
00:07:17.859 [32/36] Linking target samples/server
00:07:17.859 [33/36] Linking target samples/null
00:07:17.859 [34/36] Linking target samples/lspci
00:07:17.859 [35/36] Linking target samples/shadow_ioeventfd_server
00:07:17.859 [36/36] Linking target samples/gpio-pci-idio-16
00:07:17.859 INFO: autodetecting backend as ninja
00:07:17.859 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:07:17.859 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:18.425 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:18.425 ninja: no work to do. 00:07:23.690 The Meson build system 00:07:23.690 Version: 1.5.0 00:07:23.690 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:07:23.690 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:07:23.690 Build type: native build 00:07:23.690 Program cat found: YES (/usr/bin/cat) 00:07:23.690 Project name: DPDK 00:07:23.690 Project version: 24.03.0 00:07:23.690 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:07:23.690 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:07:23.690 Host machine cpu family: x86_64 00:07:23.690 Host machine cpu: x86_64 00:07:23.690 Message: ## Building in Developer Mode ## 00:07:23.690 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:23.690 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:23.690 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:23.690 Program python3 found: YES (/usr/bin/python3) 00:07:23.690 Program cat found: YES (/usr/bin/cat) 00:07:23.690 Compiler for C supports arguments -march=native: YES 00:07:23.690 Checking for size of "void *" : 8 00:07:23.690 Checking for size of "void *" : 8 (cached) 00:07:23.690 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:23.690 Library m found: YES 00:07:23.690 Library numa found: YES 00:07:23.690 Has header "numaif.h" : YES 00:07:23.690 Library fdt found: NO 00:07:23.690 Library execinfo found: NO 00:07:23.690 Has header "execinfo.h" : YES 00:07:23.690 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:23.690 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:23.690 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:23.690 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:23.690 Run-time dependency openssl found: YES 3.1.1 00:07:23.690 Run-time dependency libpcap found: YES 1.10.4 00:07:23.690 Has header "pcap.h" with dependency libpcap: YES 00:07:23.690 Compiler for C supports arguments -Wcast-qual: YES 00:07:23.690 Compiler for C supports arguments -Wdeprecated: YES 00:07:23.690 Compiler for C supports arguments -Wformat: YES 00:07:23.690 Compiler for C supports arguments -Wformat-nonliteral: YES 00:07:23.690 Compiler for C supports arguments -Wformat-security: YES 00:07:23.690 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:23.690 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:23.690 Compiler for C supports arguments -Wnested-externs: YES 00:07:23.690 Compiler for C supports arguments -Wold-style-definition: YES 00:07:23.690 Compiler for C supports arguments -Wpointer-arith: YES 00:07:23.691 Compiler for C supports arguments -Wsign-compare: YES 00:07:23.691 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:23.691 Compiler for C supports arguments -Wundef: YES 00:07:23.691 Compiler for C supports arguments -Wwrite-strings: YES 00:07:23.691 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:23.691 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:07:23.691 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:07:23.691 Program objdump found: YES (/usr/bin/objdump) 00:07:23.691 Compiler for C supports arguments -mavx512f: YES 00:07:23.691 Checking if "AVX512 checking" compiles: YES 00:07:23.691 Fetching value of define "__SSE4_2__" : 1 00:07:23.691 Fetching value of define "__AES__" : 1 00:07:23.691 Fetching value of define "__AVX__" : 1 00:07:23.691 Fetching value of define "__AVX2__" : 1 00:07:23.691 Fetching value of define "__AVX512BW__" : 1 00:07:23.691 Fetching value of define "__AVX512CD__" : 1 00:07:23.691 Fetching value of define "__AVX512DQ__" : 1 00:07:23.691 Fetching value of define "__AVX512F__" : 1 00:07:23.691 Fetching value of define "__AVX512VL__" : 1 00:07:23.691 Fetching value of define "__PCLMUL__" : 1 00:07:23.691 Fetching value of define "__RDRND__" : 1 00:07:23.691 Fetching value of define "__RDSEED__" : 1 00:07:23.691 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:23.691 Fetching value of define "__znver1__" : (undefined) 00:07:23.691 Fetching value of define "__znver2__" : (undefined) 00:07:23.691 Fetching value of define "__znver3__" : (undefined) 00:07:23.691 Fetching value of define "__znver4__" : (undefined) 00:07:23.691 Compiler for C supports arguments -Wno-format-truncation: NO 00:07:23.691 Message: lib/log: Defining dependency "log" 00:07:23.691 Message: lib/kvargs: Defining dependency "kvargs" 00:07:23.691 Message: lib/telemetry: Defining dependency "telemetry" 00:07:23.691 Checking for function "getentropy" : NO 00:07:23.691 Message: lib/eal: Defining dependency "eal" 00:07:23.691 Message: lib/ring: Defining dependency "ring" 00:07:23.691 Message: lib/rcu: Defining dependency "rcu" 00:07:23.691 Message: lib/mempool: Defining dependency "mempool" 00:07:23.691 Message: lib/mbuf: Defining dependency "mbuf" 00:07:23.691 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:23.691 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:23.691 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:23.691 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:23.691 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:23.691 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:23.691 Compiler for C supports arguments -mpclmul: YES 00:07:23.691 Compiler for C supports arguments -maes: YES 00:07:23.691 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:23.691 Compiler for C supports arguments -mavx512bw: YES 00:07:23.691 Compiler for C supports arguments -mavx512dq: YES 00:07:23.691 Compiler for C supports arguments -mavx512vl: YES 00:07:23.691 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:23.691 Compiler for C supports arguments -mavx2: YES 00:07:23.691 Compiler for C supports arguments -mavx: YES 00:07:23.691 Message: lib/net: Defining dependency "net" 00:07:23.691 Message: lib/meter: Defining dependency "meter" 00:07:23.691 Message: lib/ethdev: Defining dependency "ethdev" 00:07:23.691 Message: lib/pci: Defining dependency "pci" 00:07:23.691 Message: lib/cmdline: Defining dependency "cmdline" 00:07:23.691 Message: lib/hash: Defining dependency "hash" 00:07:23.691 Message: lib/timer: Defining dependency "timer" 00:07:23.691 Message: lib/compressdev: Defining dependency "compressdev" 00:07:23.691 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:23.691 Message: lib/dmadev: Defining dependency "dmadev" 00:07:23.691 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:23.691 Message: lib/power: Defining dependency "power" 00:07:23.691 Message: lib/reorder: Defining 
dependency "reorder" 00:07:23.691 Message: lib/security: Defining dependency "security" 00:07:23.691 Has header "linux/userfaultfd.h" : YES 00:07:23.691 Has header "linux/vduse.h" : YES 00:07:23.691 Message: lib/vhost: Defining dependency "vhost" 00:07:23.691 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:07:23.691 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:23.691 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:23.691 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:23.691 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:23.691 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:23.691 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:23.691 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:23.691 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:23.691 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:23.691 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:23.691 Configuring doxy-api-html.conf using configuration 00:07:23.691 Configuring doxy-api-man.conf using configuration 00:07:23.691 Program mandb found: YES (/usr/bin/mandb) 00:07:23.691 Program sphinx-build found: NO 00:07:23.691 Configuring rte_build_config.h using configuration 00:07:23.691 Message: 00:07:23.691 ================= 00:07:23.691 Applications Enabled 00:07:23.691 ================= 00:07:23.691 00:07:23.691 apps: 00:07:23.691 00:07:23.691 00:07:23.691 Message: 00:07:23.691 ================= 00:07:23.691 Libraries Enabled 00:07:23.691 ================= 00:07:23.691 00:07:23.691 libs: 00:07:23.691 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:23.691 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:23.691 cryptodev, dmadev, power, reorder, security, vhost, 00:07:23.691 00:07:23.691 Message: 00:07:23.691 =============== 00:07:23.691 Drivers Enabled 00:07:23.691 =============== 00:07:23.691 00:07:23.691 common: 00:07:23.691 00:07:23.691 bus: 00:07:23.691 pci, vdev, 00:07:23.691 mempool: 00:07:23.691 ring, 00:07:23.691 dma: 00:07:23.691 00:07:23.691 net: 00:07:23.691 00:07:23.691 crypto: 00:07:23.691 00:07:23.691 compress: 00:07:23.691 00:07:23.691 vdpa: 00:07:23.691 00:07:23.691 00:07:23.691 Message: 00:07:23.691 ================= 00:07:23.691 Content Skipped 00:07:23.691 ================= 00:07:23.691 00:07:23.691 apps: 00:07:23.691 dumpcap: explicitly disabled via build config 00:07:23.691 graph: explicitly disabled via build config 00:07:23.691 pdump: explicitly disabled via build config 00:07:23.691 proc-info: explicitly disabled via build config 00:07:23.691 test-acl: explicitly disabled via build config 00:07:23.691 test-bbdev: explicitly disabled via build config 00:07:23.691 test-cmdline: explicitly disabled via build config 00:07:23.691 test-compress-perf: explicitly disabled via build config 00:07:23.691 test-crypto-perf: explicitly disabled via build config 00:07:23.691 test-dma-perf: explicitly disabled via build config 00:07:23.691 test-eventdev: explicitly disabled via build config 00:07:23.691 test-fib: explicitly disabled via build config 00:07:23.691 test-flow-perf: explicitly disabled via build config 00:07:23.691 test-gpudev: explicitly disabled via build config 00:07:23.691 test-mldev: explicitly disabled via build config 00:07:23.691 test-pipeline: explicitly disabled via build config 00:07:23.691 test-pmd: 
explicitly disabled via build config 00:07:23.691 test-regex: explicitly disabled via build config 00:07:23.691 test-sad: explicitly disabled via build config 00:07:23.691 test-security-perf: explicitly disabled via build config 00:07:23.691 00:07:23.691 libs: 00:07:23.691 argparse: explicitly disabled via build config 00:07:23.691 metrics: explicitly disabled via build config 00:07:23.691 acl: explicitly disabled via build config 00:07:23.691 bbdev: explicitly disabled via build config 00:07:23.691 bitratestats: explicitly disabled via build config 00:07:23.691 bpf: explicitly disabled via build config 00:07:23.691 cfgfile: explicitly disabled via build config 00:07:23.691 distributor: explicitly disabled via build config 00:07:23.691 efd: explicitly disabled via build config 00:07:23.691 eventdev: explicitly disabled via build config 00:07:23.691 dispatcher: explicitly disabled via build config 00:07:23.691 gpudev: explicitly disabled via build config 00:07:23.691 gro: explicitly disabled via build config 00:07:23.691 gso: explicitly disabled via build config 00:07:23.691 ip_frag: explicitly disabled via build config 00:07:23.691 jobstats: explicitly disabled via build config 00:07:23.691 latencystats: explicitly disabled via build config 00:07:23.691 lpm: explicitly disabled via build config 00:07:23.691 member: explicitly disabled via build config 00:07:23.691 pcapng: explicitly disabled via build config 00:07:23.691 rawdev: explicitly disabled via build config 00:07:23.691 regexdev: explicitly disabled via build config 00:07:23.691 mldev: explicitly disabled via build config 00:07:23.691 rib: explicitly disabled via build config 00:07:23.691 sched: explicitly disabled via build config 00:07:23.691 stack: explicitly disabled via build config 00:07:23.691 ipsec: explicitly disabled via build config 00:07:23.691 pdcp: explicitly disabled via build config 00:07:23.691 fib: explicitly disabled via build config 00:07:23.691 port: explicitly disabled via build config 00:07:23.691 pdump: explicitly disabled via build config 00:07:23.691 table: explicitly disabled via build config 00:07:23.691 pipeline: explicitly disabled via build config 00:07:23.691 graph: explicitly disabled via build config 00:07:23.691 node: explicitly disabled via build config 00:07:23.691 00:07:23.691 drivers: 00:07:23.691 common/cpt: not in enabled drivers build config 00:07:23.691 common/dpaax: not in enabled drivers build config 00:07:23.691 common/iavf: not in enabled drivers build config 00:07:23.691 common/idpf: not in enabled drivers build config 00:07:23.691 common/ionic: not in enabled drivers build config 00:07:23.691 common/mvep: not in enabled drivers build config 00:07:23.691 common/octeontx: not in enabled drivers build config 00:07:23.691 bus/auxiliary: not in enabled drivers build config 00:07:23.691 bus/cdx: not in enabled drivers build config 00:07:23.691 bus/dpaa: not in enabled drivers build config 00:07:23.692 bus/fslmc: not in enabled drivers build config 00:07:23.692 bus/ifpga: not in enabled drivers build config 00:07:23.692 bus/platform: not in enabled drivers build config 00:07:23.692 bus/uacce: not in enabled drivers build config 00:07:23.692 bus/vmbus: not in enabled drivers build config 00:07:23.692 common/cnxk: not in enabled drivers build config 00:07:23.692 common/mlx5: not in enabled drivers build config 00:07:23.692 common/nfp: not in enabled drivers build config 00:07:23.692 common/nitrox: not in enabled drivers build config 00:07:23.692 common/qat: not in enabled drivers build config 
00:07:23.692 common/sfc_efx: not in enabled drivers build config 00:07:23.692 mempool/bucket: not in enabled drivers build config 00:07:23.692 mempool/cnxk: not in enabled drivers build config 00:07:23.692 mempool/dpaa: not in enabled drivers build config 00:07:23.692 mempool/dpaa2: not in enabled drivers build config 00:07:23.692 mempool/octeontx: not in enabled drivers build config 00:07:23.692 mempool/stack: not in enabled drivers build config 00:07:23.692 dma/cnxk: not in enabled drivers build config 00:07:23.692 dma/dpaa: not in enabled drivers build config 00:07:23.692 dma/dpaa2: not in enabled drivers build config 00:07:23.692 dma/hisilicon: not in enabled drivers build config 00:07:23.692 dma/idxd: not in enabled drivers build config 00:07:23.692 dma/ioat: not in enabled drivers build config 00:07:23.692 dma/skeleton: not in enabled drivers build config 00:07:23.692 net/af_packet: not in enabled drivers build config 00:07:23.692 net/af_xdp: not in enabled drivers build config 00:07:23.692 net/ark: not in enabled drivers build config 00:07:23.692 net/atlantic: not in enabled drivers build config 00:07:23.692 net/avp: not in enabled drivers build config 00:07:23.692 net/axgbe: not in enabled drivers build config 00:07:23.692 net/bnx2x: not in enabled drivers build config 00:07:23.692 net/bnxt: not in enabled drivers build config 00:07:23.692 net/bonding: not in enabled drivers build config 00:07:23.692 net/cnxk: not in enabled drivers build config 00:07:23.692 net/cpfl: not in enabled drivers build config 00:07:23.692 net/cxgbe: not in enabled drivers build config 00:07:23.692 net/dpaa: not in enabled drivers build config 00:07:23.692 net/dpaa2: not in enabled drivers build config 00:07:23.692 net/e1000: not in enabled drivers build config 00:07:23.692 net/ena: not in enabled drivers build config 00:07:23.692 net/enetc: not in enabled drivers build config 00:07:23.692 net/enetfec: not in enabled drivers build config 00:07:23.692 net/enic: not in enabled drivers build config 00:07:23.692 net/failsafe: not in enabled drivers build config 00:07:23.692 net/fm10k: not in enabled drivers build config 00:07:23.692 net/gve: not in enabled drivers build config 00:07:23.692 net/hinic: not in enabled drivers build config 00:07:23.692 net/hns3: not in enabled drivers build config 00:07:23.692 net/i40e: not in enabled drivers build config 00:07:23.692 net/iavf: not in enabled drivers build config 00:07:23.692 net/ice: not in enabled drivers build config 00:07:23.692 net/idpf: not in enabled drivers build config 00:07:23.692 net/igc: not in enabled drivers build config 00:07:23.692 net/ionic: not in enabled drivers build config 00:07:23.692 net/ipn3ke: not in enabled drivers build config 00:07:23.692 net/ixgbe: not in enabled drivers build config 00:07:23.692 net/mana: not in enabled drivers build config 00:07:23.692 net/memif: not in enabled drivers build config 00:07:23.692 net/mlx4: not in enabled drivers build config 00:07:23.692 net/mlx5: not in enabled drivers build config 00:07:23.692 net/mvneta: not in enabled drivers build config 00:07:23.692 net/mvpp2: not in enabled drivers build config 00:07:23.692 net/netvsc: not in enabled drivers build config 00:07:23.692 net/nfb: not in enabled drivers build config 00:07:23.692 net/nfp: not in enabled drivers build config 00:07:23.692 net/ngbe: not in enabled drivers build config 00:07:23.692 net/null: not in enabled drivers build config 00:07:23.692 net/octeontx: not in enabled drivers build config 00:07:23.692 net/octeon_ep: not in enabled 
drivers build config 00:07:23.692 net/pcap: not in enabled drivers build config 00:07:23.692 net/pfe: not in enabled drivers build config 00:07:23.692 net/qede: not in enabled drivers build config 00:07:23.692 net/ring: not in enabled drivers build config 00:07:23.692 net/sfc: not in enabled drivers build config 00:07:23.692 net/softnic: not in enabled drivers build config 00:07:23.692 net/tap: not in enabled drivers build config 00:07:23.692 net/thunderx: not in enabled drivers build config 00:07:23.692 net/txgbe: not in enabled drivers build config 00:07:23.692 net/vdev_netvsc: not in enabled drivers build config 00:07:23.692 net/vhost: not in enabled drivers build config 00:07:23.692 net/virtio: not in enabled drivers build config 00:07:23.692 net/vmxnet3: not in enabled drivers build config 00:07:23.692 raw/*: missing internal dependency, "rawdev" 00:07:23.692 crypto/armv8: not in enabled drivers build config 00:07:23.692 crypto/bcmfs: not in enabled drivers build config 00:07:23.692 crypto/caam_jr: not in enabled drivers build config 00:07:23.692 crypto/ccp: not in enabled drivers build config 00:07:23.692 crypto/cnxk: not in enabled drivers build config 00:07:23.692 crypto/dpaa_sec: not in enabled drivers build config 00:07:23.692 crypto/dpaa2_sec: not in enabled drivers build config 00:07:23.692 crypto/ipsec_mb: not in enabled drivers build config 00:07:23.692 crypto/mlx5: not in enabled drivers build config 00:07:23.692 crypto/mvsam: not in enabled drivers build config 00:07:23.692 crypto/nitrox: not in enabled drivers build config 00:07:23.692 crypto/null: not in enabled drivers build config 00:07:23.692 crypto/octeontx: not in enabled drivers build config 00:07:23.692 crypto/openssl: not in enabled drivers build config 00:07:23.692 crypto/scheduler: not in enabled drivers build config 00:07:23.692 crypto/uadk: not in enabled drivers build config 00:07:23.692 crypto/virtio: not in enabled drivers build config 00:07:23.692 compress/isal: not in enabled drivers build config 00:07:23.692 compress/mlx5: not in enabled drivers build config 00:07:23.692 compress/nitrox: not in enabled drivers build config 00:07:23.692 compress/octeontx: not in enabled drivers build config 00:07:23.692 compress/zlib: not in enabled drivers build config 00:07:23.692 regex/*: missing internal dependency, "regexdev" 00:07:23.692 ml/*: missing internal dependency, "mldev" 00:07:23.692 vdpa/ifc: not in enabled drivers build config 00:07:23.692 vdpa/mlx5: not in enabled drivers build config 00:07:23.692 vdpa/nfp: not in enabled drivers build config 00:07:23.692 vdpa/sfc: not in enabled drivers build config 00:07:23.692 event/*: missing internal dependency, "eventdev" 00:07:23.692 baseband/*: missing internal dependency, "bbdev" 00:07:23.692 gpu/*: missing internal dependency, "gpudev" 00:07:23.692 00:07:23.692 00:07:23.692 Build targets in project: 85 00:07:23.692 00:07:23.692 DPDK 24.03.0 00:07:23.692 00:07:23.692 User defined options 00:07:23.692 buildtype : debug 00:07:23.692 default_library : static 00:07:23.692 libdir : lib 00:07:23.692 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:23.692 c_args : -fPIC -Werror 00:07:23.692 c_link_args : 00:07:23.692 cpu_instruction_set: native 00:07:23.692 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:07:23.692 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:07:23.692 enable_docs : false 00:07:23.692 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:23.692 enable_kmods : false 00:07:23.692 max_lcores : 128 00:07:23.692 tests : false 00:07:23.692 00:07:23.692 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:23.692 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:07:23.958 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:23.958 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:23.958 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:23.958 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:23.958 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:23.958 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:23.958 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:23.958 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:23.958 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:23.958 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:23.958 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:23.958 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:23.958 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:23.958 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:23.958 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:23.958 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:23.958 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:23.958 [18/268] Linking static target lib/librte_kvargs.a 00:07:23.958 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:23.958 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:23.958 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:23.958 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:23.958 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:23.958 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:23.958 [25/268] Linking static target lib/librte_log.a 00:07:23.958 [26/268] Linking static target lib/librte_pci.a 00:07:23.958 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:23.958 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:23.958 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:23.958 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:23.958 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:23.958 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:23.958 [33/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:07:24.218 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:24.218 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:24.476 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:24.476 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:24.476 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:24.476 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:24.476 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:24.476 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:24.476 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:24.476 [43/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.476 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:24.476 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:24.476 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:24.476 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:24.476 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:24.476 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:24.476 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:24.476 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:24.476 [52/268] Linking static target lib/librte_meter.a 00:07:24.476 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:24.476 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:24.476 [55/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:24.476 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:24.476 [57/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:24.476 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:24.476 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:24.476 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:24.476 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:24.476 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:24.476 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:24.476 [64/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:24.477 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:24.477 [66/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:24.477 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:24.477 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:24.477 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:24.477 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:24.477 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:24.477 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:24.477 [73/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:24.477 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:24.477 [75/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:24.477 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:24.477 [77/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:24.477 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:24.477 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:24.477 [80/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:24.477 [81/268] Linking static target lib/librte_telemetry.a 00:07:24.477 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:24.477 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:24.477 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:24.477 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:24.477 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:24.477 [87/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:24.477 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:24.477 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:24.477 [90/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.477 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:24.477 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:24.477 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:24.477 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:24.477 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:24.477 [96/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:24.477 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:24.477 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:24.477 [99/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:24.477 [100/268] Linking static target lib/librte_ring.a 00:07:24.477 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:24.477 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:24.477 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:24.477 [104/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:24.477 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:24.477 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:24.477 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:24.477 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:24.477 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:24.477 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:24.477 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:24.477 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:24.477 [113/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:24.477 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:24.477 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:24.477 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:24.477 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:24.477 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:24.477 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:24.735 [120/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:24.735 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:24.735 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:24.735 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:24.735 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:24.735 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:24.735 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:24.735 [127/268] Linking static target lib/librte_cmdline.a 00:07:24.735 [128/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:24.735 [129/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:24.735 [130/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:24.735 [131/268] Linking static target lib/librte_mempool.a 00:07:24.735 [132/268] Linking static target lib/librte_net.a 00:07:24.735 [133/268] Linking static target lib/librte_eal.a 00:07:24.735 [134/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:24.735 [135/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:24.735 [136/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:24.735 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:24.735 [138/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:24.735 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:24.735 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:24.735 [141/268] Linking static target lib/librte_timer.a 00:07:24.735 [142/268] Linking static target lib/librte_rcu.a 00:07:24.735 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:24.735 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:24.735 [145/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:24.735 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:24.735 [147/268] Linking static target lib/librte_mbuf.a 00:07:24.735 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:24.735 [149/268] Linking static target lib/librte_dmadev.a 00:07:24.735 [150/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.735 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:24.735 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:24.735 [153/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.735 [154/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:24.735 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:24.735 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:24.735 [157/268] Linking target lib/librte_log.so.24.1 00:07:24.735 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:24.735 [159/268] Linking static target lib/librte_compressdev.a 00:07:24.735 [160/268] Linking static target lib/librte_hash.a 00:07:24.735 [161/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.735 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:24.735 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:24.993 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:24.993 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:24.993 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:24.993 [167/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.993 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:24.993 [169/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:24.993 [170/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:24.993 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:24.993 [172/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.993 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:24.994 [174/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:24.994 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:24.994 [176/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.994 [177/268] Linking static target lib/librte_power.a 00:07:24.994 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:24.994 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:24.994 [180/268] Linking target lib/librte_kvargs.so.24.1 00:07:24.994 [181/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:24.994 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:24.994 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:24.994 [184/268] Linking target lib/librte_telemetry.so.24.1 00:07:24.994 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:24.994 [186/268] Linking static target lib/librte_reorder.a 00:07:24.994 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:24.994 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:24.994 [189/268] Linking static target lib/librte_security.a 00:07:24.994 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:24.994 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:24.994 [192/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.252 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:25.252 [194/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:25.252 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:25.252 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:25.252 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:25.252 [198/268] Linking static target lib/librte_cryptodev.a 00:07:25.252 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:25.252 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:25.252 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:25.252 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:25.252 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:25.252 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:25.252 [205/268] Linking static target drivers/librte_bus_vdev.a 00:07:25.252 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:25.252 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:25.252 [208/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.252 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:25.252 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:25.252 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:25.252 [212/268] Linking static target drivers/librte_bus_pci.a 00:07:25.252 [213/268] Linking static target lib/librte_ethdev.a 00:07:25.512 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.512 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:25.512 [216/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.512 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:25.512 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:25.512 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.512 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.512 [221/268] Linking static target drivers/librte_mempool_ring.a 00:07:25.512 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.770 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.771 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.771 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.771 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:26.029 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:26.029 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:26.029 [229/268] Linking static target lib/librte_vhost.a 00:07:26.964 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:07:27.900 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.166 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.109 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.109 [234/268] Linking target lib/librte_eal.so.24.1 00:07:34.109 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:34.109 [236/268] Linking target lib/librte_pci.so.24.1 00:07:34.109 [237/268] Linking target lib/librte_dmadev.so.24.1 00:07:34.109 [238/268] Linking target lib/librte_meter.so.24.1 00:07:34.109 [239/268] Linking target lib/librte_timer.so.24.1 00:07:34.109 [240/268] Linking target lib/librte_ring.so.24.1 00:07:34.109 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:34.368 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:34.368 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:34.368 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:34.368 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:34.368 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:34.368 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:34.368 [248/268] Linking target lib/librte_mempool.so.24.1 00:07:34.368 [249/268] Linking target lib/librte_rcu.so.24.1 00:07:34.627 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:34.627 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:34.627 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:34.627 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:34.887 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:34.887 [255/268] Linking target lib/librte_compressdev.so.24.1 00:07:34.887 [256/268] Linking target lib/librte_reorder.so.24.1 00:07:34.887 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:07:34.887 [258/268] Linking target lib/librte_net.so.24.1 00:07:34.887 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:34.887 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:35.146 [261/268] Linking target lib/librte_hash.so.24.1 00:07:35.146 [262/268] Linking target lib/librte_cmdline.so.24.1 00:07:35.146 [263/268] Linking target lib/librte_security.so.24.1 00:07:35.146 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:35.146 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:35.146 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:35.405 [267/268] Linking target lib/librte_power.so.24.1 00:07:35.405 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:35.405 INFO: autodetecting backend as ninja 00:07:35.405 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:07:36.341 CC lib/ut/ut.o 00:07:36.341 CC lib/ut_mock/mock.o 00:07:36.341 CC lib/log/log.o 00:07:36.341 CC lib/log/log_flags.o 00:07:36.341 CC lib/log/log_deprecated.o 00:07:36.600 LIB libspdk_log.a 00:07:36.600 LIB libspdk_ut.a 00:07:36.600 LIB 
libspdk_ut_mock.a 00:07:36.859 CC lib/dma/dma.o 00:07:36.859 CC lib/util/bit_array.o 00:07:36.859 CC lib/util/base64.o 00:07:36.859 CC lib/util/cpuset.o 00:07:36.859 CC lib/util/crc16.o 00:07:36.859 CC lib/util/crc32.o 00:07:36.859 CC lib/util/crc32c.o 00:07:36.859 CC lib/ioat/ioat.o 00:07:36.859 CC lib/util/crc32_ieee.o 00:07:36.859 CC lib/util/crc64.o 00:07:36.859 CC lib/util/dif.o 00:07:36.859 CC lib/util/file.o 00:07:36.859 CC lib/util/fd.o 00:07:36.859 CXX lib/trace_parser/trace.o 00:07:36.859 CC lib/util/fd_group.o 00:07:36.859 CC lib/util/math.o 00:07:36.859 CC lib/util/hexlify.o 00:07:36.859 CC lib/util/iov.o 00:07:36.859 CC lib/util/net.o 00:07:36.859 CC lib/util/pipe.o 00:07:36.859 CC lib/util/strerror_tls.o 00:07:36.859 CC lib/util/xor.o 00:07:36.859 CC lib/util/uuid.o 00:07:36.859 CC lib/util/string.o 00:07:36.859 CC lib/util/zipf.o 00:07:36.859 CC lib/util/md5.o 00:07:36.859 CC lib/vfio_user/host/vfio_user_pci.o 00:07:36.859 CC lib/vfio_user/host/vfio_user.o 00:07:36.859 LIB libspdk_dma.a 00:07:36.859 LIB libspdk_ioat.a 00:07:37.118 LIB libspdk_vfio_user.a 00:07:37.118 LIB libspdk_util.a 00:07:37.376 LIB libspdk_trace_parser.a 00:07:37.376 CC lib/json/json_parse.o 00:07:37.376 CC lib/vmd/led.o 00:07:37.376 CC lib/conf/conf.o 00:07:37.376 CC lib/json/json_util.o 00:07:37.376 CC lib/vmd/vmd.o 00:07:37.376 CC lib/json/json_write.o 00:07:37.376 CC lib/env_dpdk/env.o 00:07:37.376 CC lib/env_dpdk/memory.o 00:07:37.376 CC lib/env_dpdk/pci.o 00:07:37.376 CC lib/env_dpdk/init.o 00:07:37.376 CC lib/env_dpdk/pci_virtio.o 00:07:37.376 CC lib/env_dpdk/threads.o 00:07:37.376 CC lib/env_dpdk/pci_ioat.o 00:07:37.376 CC lib/rdma_utils/rdma_utils.o 00:07:37.376 CC lib/env_dpdk/pci_vmd.o 00:07:37.376 CC lib/env_dpdk/pci_idxd.o 00:07:37.376 CC lib/env_dpdk/pci_event.o 00:07:37.376 CC lib/env_dpdk/sigbus_handler.o 00:07:37.376 CC lib/idxd/idxd.o 00:07:37.376 CC lib/env_dpdk/pci_dpdk.o 00:07:37.376 CC lib/idxd/idxd_user.o 00:07:37.376 CC lib/idxd/idxd_kernel.o 00:07:37.376 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:37.376 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:37.634 LIB libspdk_conf.a 00:07:37.634 LIB libspdk_rdma_utils.a 00:07:37.634 LIB libspdk_json.a 00:07:37.893 LIB libspdk_idxd.a 00:07:37.893 LIB libspdk_vmd.a 00:07:37.893 CC lib/jsonrpc/jsonrpc_server.o 00:07:37.893 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:37.893 CC lib/jsonrpc/jsonrpc_client.o 00:07:37.893 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:37.893 CC lib/rdma_provider/common.o 00:07:37.893 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:38.152 LIB libspdk_rdma_provider.a 00:07:38.152 LIB libspdk_jsonrpc.a 00:07:38.411 LIB libspdk_env_dpdk.a 00:07:38.411 CC lib/rpc/rpc.o 00:07:38.670 LIB libspdk_rpc.a 00:07:38.928 CC lib/keyring/keyring.o 00:07:38.928 CC lib/keyring/keyring_rpc.o 00:07:38.928 CC lib/trace/trace.o 00:07:38.928 CC lib/trace/trace_flags.o 00:07:38.928 CC lib/trace/trace_rpc.o 00:07:38.928 CC lib/notify/notify.o 00:07:38.928 CC lib/notify/notify_rpc.o 00:07:38.928 LIB libspdk_notify.a 00:07:38.928 LIB libspdk_trace.a 00:07:39.187 LIB libspdk_keyring.a 00:07:39.445 CC lib/sock/sock.o 00:07:39.445 CC lib/sock/sock_rpc.o 00:07:39.445 CC lib/thread/thread.o 00:07:39.445 CC lib/thread/iobuf.o 00:07:39.704 LIB libspdk_sock.a 00:07:39.963 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:39.963 CC lib/nvme/nvme_ctrlr.o 00:07:39.963 CC lib/nvme/nvme_fabric.o 00:07:39.963 CC lib/nvme/nvme_ns_cmd.o 00:07:39.963 CC lib/nvme/nvme_ns.o 00:07:39.963 CC lib/nvme/nvme_qpair.o 00:07:39.963 CC lib/nvme/nvme_pcie_common.o 00:07:39.963 CC lib/nvme/nvme_pcie.o 
00:07:39.963 CC lib/nvme/nvme.o 00:07:39.963 CC lib/nvme/nvme_quirks.o 00:07:39.963 CC lib/nvme/nvme_transport.o 00:07:39.963 CC lib/nvme/nvme_discovery.o 00:07:39.963 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:39.963 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:39.963 CC lib/nvme/nvme_tcp.o 00:07:39.963 CC lib/nvme/nvme_opal.o 00:07:39.963 CC lib/nvme/nvme_io_msg.o 00:07:39.963 CC lib/nvme/nvme_poll_group.o 00:07:39.963 CC lib/nvme/nvme_zns.o 00:07:39.963 CC lib/nvme/nvme_stubs.o 00:07:39.963 CC lib/nvme/nvme_vfio_user.o 00:07:39.963 CC lib/nvme/nvme_auth.o 00:07:39.963 CC lib/nvme/nvme_cuse.o 00:07:39.963 CC lib/nvme/nvme_rdma.o 00:07:40.222 LIB libspdk_thread.a 00:07:40.481 CC lib/blob/blobstore.o 00:07:40.481 CC lib/blob/request.o 00:07:40.481 CC lib/blob/zeroes.o 00:07:40.481 CC lib/blob/blob_bs_dev.o 00:07:40.481 CC lib/init/subsystem.o 00:07:40.481 CC lib/init/json_config.o 00:07:40.481 CC lib/init/subsystem_rpc.o 00:07:40.481 CC lib/init/rpc.o 00:07:40.481 CC lib/virtio/virtio.o 00:07:40.481 CC lib/accel/accel.o 00:07:40.481 CC lib/accel/accel_rpc.o 00:07:40.481 CC lib/accel/accel_sw.o 00:07:40.481 CC lib/virtio/virtio_vhost_user.o 00:07:40.481 CC lib/virtio/virtio_vfio_user.o 00:07:40.481 CC lib/virtio/virtio_pci.o 00:07:40.481 CC lib/fsdev/fsdev.o 00:07:40.481 CC lib/vfu_tgt/tgt_endpoint.o 00:07:40.481 CC lib/fsdev/fsdev_io.o 00:07:40.481 CC lib/vfu_tgt/tgt_rpc.o 00:07:40.481 CC lib/fsdev/fsdev_rpc.o 00:07:40.740 LIB libspdk_init.a 00:07:40.740 LIB libspdk_virtio.a 00:07:40.740 LIB libspdk_vfu_tgt.a 00:07:40.999 LIB libspdk_fsdev.a 00:07:40.999 CC lib/event/app.o 00:07:40.999 CC lib/event/reactor.o 00:07:40.999 CC lib/event/log_rpc.o 00:07:40.999 CC lib/event/app_rpc.o 00:07:40.999 CC lib/event/scheduler_static.o 00:07:41.258 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:41.258 LIB libspdk_event.a 00:07:41.517 LIB libspdk_nvme.a 00:07:41.517 LIB libspdk_accel.a 00:07:41.776 LIB libspdk_fuse_dispatcher.a 00:07:41.776 CC lib/bdev/bdev.o 00:07:41.776 CC lib/bdev/part.o 00:07:41.776 CC lib/bdev/bdev_rpc.o 00:07:41.776 CC lib/bdev/bdev_zone.o 00:07:41.776 CC lib/bdev/scsi_nvme.o 00:07:42.713 LIB libspdk_blob.a 00:07:42.972 CC lib/blobfs/blobfs.o 00:07:42.972 CC lib/blobfs/tree.o 00:07:42.972 CC lib/lvol/lvol.o 00:07:43.540 LIB libspdk_blobfs.a 00:07:43.540 LIB libspdk_lvol.a 00:07:44.108 LIB libspdk_bdev.a 00:07:44.367 CC lib/ublk/ublk.o 00:07:44.367 CC lib/ublk/ublk_rpc.o 00:07:44.367 CC lib/nbd/nbd.o 00:07:44.367 CC lib/nbd/nbd_rpc.o 00:07:44.367 CC lib/ftl/ftl_core.o 00:07:44.367 CC lib/ftl/ftl_debug.o 00:07:44.367 CC lib/ftl/ftl_init.o 00:07:44.367 CC lib/ftl/ftl_layout.o 00:07:44.367 CC lib/ftl/ftl_io.o 00:07:44.367 CC lib/ftl/ftl_sb.o 00:07:44.367 CC lib/scsi/dev.o 00:07:44.367 CC lib/nvmf/ctrlr.o 00:07:44.367 CC lib/ftl/ftl_l2p.o 00:07:44.367 CC lib/scsi/lun.o 00:07:44.367 CC lib/nvmf/ctrlr_discovery.o 00:07:44.367 CC lib/ftl/ftl_l2p_flat.o 00:07:44.367 CC lib/scsi/port.o 00:07:44.367 CC lib/nvmf/ctrlr_bdev.o 00:07:44.367 CC lib/ftl/ftl_nv_cache.o 00:07:44.367 CC lib/scsi/scsi.o 00:07:44.367 CC lib/nvmf/subsystem.o 00:07:44.367 CC lib/ftl/ftl_band.o 00:07:44.367 CC lib/scsi/scsi_bdev.o 00:07:44.367 CC lib/ftl/ftl_band_ops.o 00:07:44.367 CC lib/nvmf/nvmf.o 00:07:44.367 CC lib/scsi/scsi_pr.o 00:07:44.367 CC lib/ftl/ftl_writer.o 00:07:44.367 CC lib/nvmf/transport.o 00:07:44.367 CC lib/nvmf/nvmf_rpc.o 00:07:44.367 CC lib/scsi/scsi_rpc.o 00:07:44.367 CC lib/scsi/task.o 00:07:44.367 CC lib/ftl/ftl_rq.o 00:07:44.367 CC lib/ftl/ftl_reloc.o 00:07:44.367 CC lib/ftl/ftl_p2l.o 00:07:44.367 CC 
lib/ftl/ftl_l2p_cache.o 00:07:44.367 CC lib/nvmf/tcp.o 00:07:44.367 CC lib/nvmf/stubs.o 00:07:44.367 CC lib/nvmf/mdns_server.o 00:07:44.367 CC lib/nvmf/vfio_user.o 00:07:44.367 CC lib/ftl/ftl_p2l_log.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt.o 00:07:44.367 CC lib/nvmf/rdma.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:44.367 CC lib/nvmf/auth.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:44.367 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:44.367 CC lib/ftl/utils/ftl_md.o 00:07:44.367 CC lib/ftl/utils/ftl_mempool.o 00:07:44.367 CC lib/ftl/utils/ftl_conf.o 00:07:44.367 CC lib/ftl/utils/ftl_bitmap.o 00:07:44.367 CC lib/ftl/utils/ftl_property.o 00:07:44.367 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:44.367 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:44.367 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:44.367 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:44.367 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:44.367 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:44.367 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:44.367 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:44.367 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:44.367 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:44.367 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:44.367 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:44.367 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:44.367 CC lib/ftl/base/ftl_base_dev.o 00:07:44.367 CC lib/ftl/base/ftl_base_bdev.o 00:07:44.367 CC lib/ftl/ftl_trace.o 00:07:44.626 LIB libspdk_nbd.a 00:07:44.884 LIB libspdk_scsi.a 00:07:44.884 LIB libspdk_ublk.a 00:07:45.142 CC lib/iscsi/conn.o 00:07:45.142 CC lib/iscsi/init_grp.o 00:07:45.142 CC lib/iscsi/portal_grp.o 00:07:45.142 CC lib/iscsi/iscsi.o 00:07:45.142 CC lib/iscsi/tgt_node.o 00:07:45.142 CC lib/iscsi/param.o 00:07:45.142 CC lib/iscsi/iscsi_subsystem.o 00:07:45.142 CC lib/iscsi/iscsi_rpc.o 00:07:45.142 CC lib/iscsi/task.o 00:07:45.142 CC lib/vhost/vhost.o 00:07:45.142 CC lib/vhost/vhost_rpc.o 00:07:45.142 CC lib/vhost/vhost_scsi.o 00:07:45.142 CC lib/vhost/vhost_blk.o 00:07:45.142 CC lib/vhost/rte_vhost_user.o 00:07:45.142 LIB libspdk_ftl.a 00:07:45.710 LIB libspdk_iscsi.a 00:07:45.710 LIB libspdk_vhost.a 00:07:45.969 LIB libspdk_nvmf.a 00:07:46.228 CC module/vfu_device/vfu_virtio.o 00:07:46.228 CC module/vfu_device/vfu_virtio_scsi.o 00:07:46.228 CC module/vfu_device/vfu_virtio_blk.o 00:07:46.228 CC module/vfu_device/vfu_virtio_rpc.o 00:07:46.228 CC module/vfu_device/vfu_virtio_fs.o 00:07:46.228 CC module/env_dpdk/env_dpdk_rpc.o 00:07:46.228 CC module/sock/posix/posix.o 00:07:46.486 CC module/accel/error/accel_error.o 00:07:46.486 CC module/accel/error/accel_error_rpc.o 00:07:46.486 CC module/fsdev/aio/linux_aio_mgr.o 00:07:46.486 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:46.486 CC module/fsdev/aio/fsdev_aio.o 00:07:46.486 CC module/accel/iaa/accel_iaa.o 00:07:46.486 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:46.486 CC module/accel/iaa/accel_iaa_rpc.o 00:07:46.486 CC module/blob/bdev/blob_bdev.o 00:07:46.486 LIB libspdk_env_dpdk_rpc.a 00:07:46.486 CC module/accel/dsa/accel_dsa.o 00:07:46.486 CC module/accel/ioat/accel_ioat.o 00:07:46.486 CC module/accel/ioat/accel_ioat_rpc.o 
00:07:46.486 CC module/keyring/linux/keyring.o 00:07:46.486 CC module/accel/dsa/accel_dsa_rpc.o 00:07:46.486 CC module/keyring/file/keyring.o 00:07:46.486 CC module/keyring/file/keyring_rpc.o 00:07:46.486 CC module/keyring/linux/keyring_rpc.o 00:07:46.486 CC module/scheduler/gscheduler/gscheduler.o 00:07:46.486 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:46.486 LIB libspdk_accel_iaa.a 00:07:46.486 LIB libspdk_scheduler_dpdk_governor.a 00:07:46.486 LIB libspdk_scheduler_gscheduler.a 00:07:46.486 LIB libspdk_keyring_linux.a 00:07:46.486 LIB libspdk_keyring_file.a 00:07:46.486 LIB libspdk_accel_error.a 00:07:46.486 LIB libspdk_accel_ioat.a 00:07:46.486 LIB libspdk_scheduler_dynamic.a 00:07:46.486 LIB libspdk_blob_bdev.a 00:07:46.743 LIB libspdk_vfu_device.a 00:07:46.743 LIB libspdk_accel_dsa.a 00:07:47.001 LIB libspdk_sock_posix.a 00:07:47.001 LIB libspdk_fsdev_aio.a 00:07:47.001 CC module/bdev/null/bdev_null.o 00:07:47.001 CC module/bdev/null/bdev_null_rpc.o 00:07:47.001 CC module/blobfs/bdev/blobfs_bdev.o 00:07:47.001 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:47.001 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:47.001 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:47.001 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:47.001 CC module/bdev/aio/bdev_aio.o 00:07:47.001 CC module/bdev/aio/bdev_aio_rpc.o 00:07:47.001 CC module/bdev/raid/bdev_raid.o 00:07:47.001 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:47.001 CC module/bdev/nvme/bdev_nvme.o 00:07:47.001 CC module/bdev/nvme/nvme_rpc.o 00:07:47.001 CC module/bdev/raid/bdev_raid_rpc.o 00:07:47.001 CC module/bdev/error/vbdev_error_rpc.o 00:07:47.001 CC module/bdev/error/vbdev_error.o 00:07:47.001 CC module/bdev/lvol/vbdev_lvol.o 00:07:47.001 CC module/bdev/nvme/bdev_mdns_client.o 00:07:47.001 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:47.001 CC module/bdev/raid/bdev_raid_sb.o 00:07:47.001 CC module/bdev/raid/raid0.o 00:07:47.001 CC module/bdev/raid/raid1.o 00:07:47.001 CC module/bdev/malloc/bdev_malloc.o 00:07:47.001 CC module/bdev/nvme/vbdev_opal.o 00:07:47.001 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:47.001 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:47.001 CC module/bdev/gpt/gpt.o 00:07:47.001 CC module/bdev/raid/concat.o 00:07:47.001 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:47.001 CC module/bdev/passthru/vbdev_passthru.o 00:07:47.001 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:47.001 CC module/bdev/gpt/vbdev_gpt.o 00:07:47.001 CC module/bdev/delay/vbdev_delay.o 00:07:47.001 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:47.001 CC module/bdev/ftl/bdev_ftl.o 00:07:47.001 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:47.001 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:47.001 CC module/bdev/iscsi/bdev_iscsi.o 00:07:47.001 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:47.001 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:47.001 CC module/bdev/split/vbdev_split.o 00:07:47.001 CC module/bdev/split/vbdev_split_rpc.o 00:07:47.259 LIB libspdk_blobfs_bdev.a 00:07:47.260 LIB libspdk_bdev_split.a 00:07:47.260 LIB libspdk_bdev_null.a 00:07:47.260 LIB libspdk_bdev_aio.a 00:07:47.260 LIB libspdk_bdev_error.a 00:07:47.260 LIB libspdk_bdev_gpt.a 00:07:47.260 LIB libspdk_bdev_ftl.a 00:07:47.260 LIB libspdk_bdev_passthru.a 00:07:47.260 LIB libspdk_bdev_zone_block.a 00:07:47.260 LIB libspdk_bdev_iscsi.a 00:07:47.260 LIB libspdk_bdev_delay.a 00:07:47.260 LIB libspdk_bdev_malloc.a 00:07:47.260 LIB libspdk_bdev_virtio.a 00:07:47.518 LIB libspdk_bdev_lvol.a 00:07:47.777 LIB libspdk_bdev_raid.a 00:07:48.714 LIB libspdk_bdev_nvme.a 
00:07:49.282 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:49.282 CC module/event/subsystems/iobuf/iobuf.o 00:07:49.282 CC module/event/subsystems/vmd/vmd.o 00:07:49.282 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:49.282 CC module/event/subsystems/fsdev/fsdev.o 00:07:49.282 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:49.282 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:49.282 CC module/event/subsystems/keyring/keyring.o 00:07:49.282 CC module/event/subsystems/sock/sock.o 00:07:49.282 CC module/event/subsystems/scheduler/scheduler.o 00:07:49.541 LIB libspdk_event_iobuf.a 00:07:49.541 LIB libspdk_event_vhost_blk.a 00:07:49.541 LIB libspdk_event_fsdev.a 00:07:49.541 LIB libspdk_event_vmd.a 00:07:49.541 LIB libspdk_event_vfu_tgt.a 00:07:49.541 LIB libspdk_event_keyring.a 00:07:49.541 LIB libspdk_event_scheduler.a 00:07:49.541 LIB libspdk_event_sock.a 00:07:49.800 CC module/event/subsystems/accel/accel.o 00:07:49.801 LIB libspdk_event_accel.a 00:07:50.060 CC module/event/subsystems/bdev/bdev.o 00:07:50.319 LIB libspdk_event_bdev.a 00:07:50.578 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:50.578 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:50.578 CC module/event/subsystems/scsi/scsi.o 00:07:50.578 CC module/event/subsystems/ublk/ublk.o 00:07:50.578 CC module/event/subsystems/nbd/nbd.o 00:07:50.578 LIB libspdk_event_ublk.a 00:07:50.578 LIB libspdk_event_nbd.a 00:07:50.578 LIB libspdk_event_scsi.a 00:07:50.578 LIB libspdk_event_nvmf.a 00:07:50.837 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:50.837 CC module/event/subsystems/iscsi/iscsi.o 00:07:50.837 LIB libspdk_event_vhost_scsi.a 00:07:51.097 LIB libspdk_event_iscsi.a 00:07:51.359 CC app/spdk_nvme_identify/identify.o 00:07:51.359 CC app/spdk_lspci/spdk_lspci.o 00:07:51.359 CC app/spdk_top/spdk_top.o 00:07:51.359 CC app/spdk_nvme_perf/perf.o 00:07:51.359 CXX app/trace/trace.o 00:07:51.359 CC app/spdk_nvme_discover/discovery_aer.o 00:07:51.359 CC app/trace_record/trace_record.o 00:07:51.359 TEST_HEADER include/spdk/accel.h 00:07:51.359 TEST_HEADER include/spdk/assert.h 00:07:51.359 TEST_HEADER include/spdk/accel_module.h 00:07:51.359 TEST_HEADER include/spdk/bdev.h 00:07:51.359 TEST_HEADER include/spdk/base64.h 00:07:51.359 CC test/rpc_client/rpc_client_test.o 00:07:51.359 TEST_HEADER include/spdk/barrier.h 00:07:51.359 TEST_HEADER include/spdk/bdev_zone.h 00:07:51.359 TEST_HEADER include/spdk/bdev_module.h 00:07:51.359 TEST_HEADER include/spdk/bit_array.h 00:07:51.359 TEST_HEADER include/spdk/bit_pool.h 00:07:51.359 TEST_HEADER include/spdk/blob_bdev.h 00:07:51.359 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:51.359 TEST_HEADER include/spdk/blobfs.h 00:07:51.359 TEST_HEADER include/spdk/blob.h 00:07:51.359 TEST_HEADER include/spdk/conf.h 00:07:51.359 TEST_HEADER include/spdk/config.h 00:07:51.359 TEST_HEADER include/spdk/cpuset.h 00:07:51.359 TEST_HEADER include/spdk/crc16.h 00:07:51.359 TEST_HEADER include/spdk/crc32.h 00:07:51.359 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:51.359 TEST_HEADER include/spdk/crc64.h 00:07:51.359 TEST_HEADER include/spdk/dif.h 00:07:51.359 TEST_HEADER include/spdk/dma.h 00:07:51.359 TEST_HEADER include/spdk/env.h 00:07:51.359 TEST_HEADER include/spdk/endian.h 00:07:51.359 TEST_HEADER include/spdk/env_dpdk.h 00:07:51.359 TEST_HEADER include/spdk/event.h 00:07:51.359 TEST_HEADER include/spdk/fd_group.h 00:07:51.359 TEST_HEADER include/spdk/fd.h 00:07:51.359 TEST_HEADER include/spdk/fsdev_module.h 00:07:51.359 TEST_HEADER include/spdk/file.h 00:07:51.359 TEST_HEADER 
include/spdk/fsdev.h 00:07:51.359 TEST_HEADER include/spdk/ftl.h 00:07:51.359 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:51.359 TEST_HEADER include/spdk/gpt_spec.h 00:07:51.359 TEST_HEADER include/spdk/hexlify.h 00:07:51.359 TEST_HEADER include/spdk/histogram_data.h 00:07:51.359 TEST_HEADER include/spdk/idxd.h 00:07:51.359 TEST_HEADER include/spdk/idxd_spec.h 00:07:51.359 TEST_HEADER include/spdk/ioat.h 00:07:51.359 TEST_HEADER include/spdk/init.h 00:07:51.359 TEST_HEADER include/spdk/json.h 00:07:51.359 TEST_HEADER include/spdk/ioat_spec.h 00:07:51.359 TEST_HEADER include/spdk/iscsi_spec.h 00:07:51.359 TEST_HEADER include/spdk/jsonrpc.h 00:07:51.359 CC app/spdk_dd/spdk_dd.o 00:07:51.359 TEST_HEADER include/spdk/likely.h 00:07:51.359 TEST_HEADER include/spdk/keyring.h 00:07:51.359 CC app/nvmf_tgt/nvmf_main.o 00:07:51.359 TEST_HEADER include/spdk/log.h 00:07:51.359 TEST_HEADER include/spdk/keyring_module.h 00:07:51.359 TEST_HEADER include/spdk/lvol.h 00:07:51.359 TEST_HEADER include/spdk/md5.h 00:07:51.359 TEST_HEADER include/spdk/memory.h 00:07:51.359 TEST_HEADER include/spdk/mmio.h 00:07:51.359 TEST_HEADER include/spdk/notify.h 00:07:51.359 TEST_HEADER include/spdk/nbd.h 00:07:51.359 TEST_HEADER include/spdk/nvme.h 00:07:51.359 TEST_HEADER include/spdk/nvme_intel.h 00:07:51.359 TEST_HEADER include/spdk/net.h 00:07:51.359 CC app/iscsi_tgt/iscsi_tgt.o 00:07:51.359 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:51.359 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:51.359 TEST_HEADER include/spdk/nvme_spec.h 00:07:51.359 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:51.359 TEST_HEADER include/spdk/nvme_zns.h 00:07:51.359 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:51.360 TEST_HEADER include/spdk/nvmf.h 00:07:51.360 TEST_HEADER include/spdk/nvmf_spec.h 00:07:51.360 TEST_HEADER include/spdk/nvmf_transport.h 00:07:51.360 TEST_HEADER include/spdk/opal.h 00:07:51.360 TEST_HEADER include/spdk/pci_ids.h 00:07:51.360 TEST_HEADER include/spdk/opal_spec.h 00:07:51.360 TEST_HEADER include/spdk/pipe.h 00:07:51.360 TEST_HEADER include/spdk/queue.h 00:07:51.360 TEST_HEADER include/spdk/reduce.h 00:07:51.360 TEST_HEADER include/spdk/scsi.h 00:07:51.360 TEST_HEADER include/spdk/rpc.h 00:07:51.360 TEST_HEADER include/spdk/scheduler.h 00:07:51.360 TEST_HEADER include/spdk/scsi_spec.h 00:07:51.360 TEST_HEADER include/spdk/stdinc.h 00:07:51.360 TEST_HEADER include/spdk/sock.h 00:07:51.360 TEST_HEADER include/spdk/string.h 00:07:51.360 TEST_HEADER include/spdk/trace.h 00:07:51.360 TEST_HEADER include/spdk/thread.h 00:07:51.360 TEST_HEADER include/spdk/tree.h 00:07:51.360 TEST_HEADER include/spdk/trace_parser.h 00:07:51.360 CC app/spdk_tgt/spdk_tgt.o 00:07:51.360 TEST_HEADER include/spdk/util.h 00:07:51.360 TEST_HEADER include/spdk/version.h 00:07:51.360 TEST_HEADER include/spdk/ublk.h 00:07:51.360 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:51.360 TEST_HEADER include/spdk/uuid.h 00:07:51.360 TEST_HEADER include/spdk/vhost.h 00:07:51.360 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:51.360 TEST_HEADER include/spdk/vmd.h 00:07:51.360 TEST_HEADER include/spdk/xor.h 00:07:51.360 CXX test/cpp_headers/accel.o 00:07:51.360 TEST_HEADER include/spdk/zipf.h 00:07:51.360 CXX test/cpp_headers/assert.o 00:07:51.360 CXX test/cpp_headers/accel_module.o 00:07:51.360 CXX test/cpp_headers/barrier.o 00:07:51.360 CXX test/cpp_headers/bdev.o 00:07:51.360 CXX test/cpp_headers/base64.o 00:07:51.360 CXX test/cpp_headers/bdev_zone.o 00:07:51.360 CXX test/cpp_headers/bdev_module.o 00:07:51.360 CXX test/cpp_headers/bit_pool.o 
00:07:51.360 CXX test/cpp_headers/bit_array.o 00:07:51.360 CXX test/cpp_headers/blob_bdev.o 00:07:51.360 CXX test/cpp_headers/blobfs.o 00:07:51.360 CXX test/cpp_headers/blobfs_bdev.o 00:07:51.360 CXX test/cpp_headers/blob.o 00:07:51.360 CXX test/cpp_headers/config.o 00:07:51.360 CXX test/cpp_headers/conf.o 00:07:51.360 CXX test/cpp_headers/crc16.o 00:07:51.360 CXX test/cpp_headers/cpuset.o 00:07:51.360 CXX test/cpp_headers/crc32.o 00:07:51.360 CXX test/cpp_headers/crc64.o 00:07:51.360 CXX test/cpp_headers/dma.o 00:07:51.360 CXX test/cpp_headers/dif.o 00:07:51.360 CXX test/cpp_headers/endian.o 00:07:51.360 CXX test/cpp_headers/env_dpdk.o 00:07:51.360 CXX test/cpp_headers/env.o 00:07:51.360 CXX test/cpp_headers/event.o 00:07:51.360 CXX test/cpp_headers/fd_group.o 00:07:51.360 CXX test/cpp_headers/fd.o 00:07:51.360 CXX test/cpp_headers/file.o 00:07:51.360 CXX test/cpp_headers/fsdev_module.o 00:07:51.360 CXX test/cpp_headers/ftl.o 00:07:51.360 CXX test/cpp_headers/fsdev.o 00:07:51.360 CXX test/cpp_headers/fuse_dispatcher.o 00:07:51.360 CC app/fio/nvme/fio_plugin.o 00:07:51.360 CXX test/cpp_headers/gpt_spec.o 00:07:51.360 LINK spdk_lspci 00:07:51.360 CXX test/cpp_headers/hexlify.o 00:07:51.360 CXX test/cpp_headers/histogram_data.o 00:07:51.360 CXX test/cpp_headers/idxd.o 00:07:51.360 CXX test/cpp_headers/init.o 00:07:51.360 CXX test/cpp_headers/idxd_spec.o 00:07:51.360 CXX test/cpp_headers/ioat.o 00:07:51.360 CXX test/cpp_headers/ioat_spec.o 00:07:51.360 CXX test/cpp_headers/iscsi_spec.o 00:07:51.360 CXX test/cpp_headers/json.o 00:07:51.360 CC examples/ioat/verify/verify.o 00:07:51.360 CXX test/cpp_headers/jsonrpc.o 00:07:51.360 CXX test/cpp_headers/keyring.o 00:07:51.360 CXX test/cpp_headers/keyring_module.o 00:07:51.360 CXX test/cpp_headers/likely.o 00:07:51.360 CXX test/cpp_headers/lvol.o 00:07:51.360 CXX test/cpp_headers/log.o 00:07:51.360 CC test/app/stub/stub.o 00:07:51.360 CXX test/cpp_headers/mmio.o 00:07:51.360 CXX test/cpp_headers/nbd.o 00:07:51.360 CC test/thread/poller_perf/poller_perf.o 00:07:51.360 CXX test/cpp_headers/md5.o 00:07:51.360 CXX test/cpp_headers/net.o 00:07:51.360 CXX test/cpp_headers/nvme.o 00:07:51.360 CC test/thread/lock/spdk_lock.o 00:07:51.360 CXX test/cpp_headers/notify.o 00:07:51.360 CXX test/cpp_headers/memory.o 00:07:51.360 CXX test/cpp_headers/nvme_intel.o 00:07:51.360 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:51.360 CXX test/cpp_headers/nvme_ocssd.o 00:07:51.360 CXX test/cpp_headers/nvme_spec.o 00:07:51.360 CC test/app/histogram_perf/histogram_perf.o 00:07:51.360 CC test/env/memory/memory_ut.o 00:07:51.360 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:51.360 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:51.360 CXX test/cpp_headers/nvmf_cmd.o 00:07:51.360 CXX test/cpp_headers/nvme_zns.o 00:07:51.360 CXX test/cpp_headers/nvmf_transport.o 00:07:51.360 CXX test/cpp_headers/nvmf.o 00:07:51.360 CC test/app/jsoncat/jsoncat.o 00:07:51.360 CXX test/cpp_headers/opal.o 00:07:51.360 CXX test/cpp_headers/opal_spec.o 00:07:51.360 CC test/env/pci/pci_ut.o 00:07:51.360 CC examples/util/zipf/zipf.o 00:07:51.360 CXX test/cpp_headers/nvmf_spec.o 00:07:51.360 CXX test/cpp_headers/queue.o 00:07:51.360 CXX test/cpp_headers/reduce.o 00:07:51.360 CXX test/cpp_headers/pci_ids.o 00:07:51.360 CXX test/cpp_headers/rpc.o 00:07:51.360 CXX test/cpp_headers/scheduler.o 00:07:51.360 CXX test/cpp_headers/scsi.o 00:07:51.360 CXX test/cpp_headers/pipe.o 00:07:51.360 CC test/env/vtophys/vtophys.o 00:07:51.360 CXX test/cpp_headers/scsi_spec.o 00:07:51.360 CXX test/cpp_headers/sock.o 
00:07:51.360 CXX test/cpp_headers/stdinc.o 00:07:51.360 LINK rpc_client_test 00:07:51.360 CC examples/ioat/perf/perf.o 00:07:51.360 CXX test/cpp_headers/string.o 00:07:51.360 CC app/fio/bdev/fio_plugin.o 00:07:51.360 CXX test/cpp_headers/thread.o 00:07:51.360 CC test/dma/test_dma/test_dma.o 00:07:51.360 LINK spdk_nvme_discover 00:07:51.360 CC test/app/bdev_svc/bdev_svc.o 00:07:51.619 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:51.619 CC test/env/mem_callbacks/mem_callbacks.o 00:07:51.619 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:51.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:51.619 LINK interrupt_tgt 00:07:51.619 LINK spdk_trace_record 00:07:51.619 CXX test/cpp_headers/trace.o 00:07:51.619 CXX test/cpp_headers/trace_parser.o 00:07:51.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:51.619 LINK jsoncat 00:07:51.619 CXX test/cpp_headers/tree.o 00:07:51.619 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:07:51.619 LINK nvmf_tgt 00:07:51.619 CXX test/cpp_headers/ublk.o 00:07:51.619 LINK iscsi_tgt 00:07:51.619 CXX test/cpp_headers/uuid.o 00:07:51.619 CXX test/cpp_headers/util.o 00:07:51.619 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:07:51.619 CXX test/cpp_headers/version.o 00:07:51.619 CXX test/cpp_headers/vfio_user_pci.o 00:07:51.619 CXX test/cpp_headers/vfio_user_spec.o 00:07:51.619 CXX test/cpp_headers/vhost.o 00:07:51.619 CXX test/cpp_headers/vmd.o 00:07:51.619 CXX test/cpp_headers/xor.o 00:07:51.619 CXX test/cpp_headers/zipf.o 00:07:51.619 LINK poller_perf 00:07:51.619 LINK zipf 00:07:51.619 LINK histogram_perf 00:07:51.619 LINK vtophys 00:07:51.619 LINK stub 00:07:51.619 LINK env_dpdk_post_init 00:07:51.619 LINK spdk_tgt 00:07:51.619 LINK verify 00:07:51.619 LINK ioat_perf 00:07:51.619 LINK bdev_svc 00:07:51.619 LINK spdk_trace 00:07:51.877 LINK spdk_dd 00:07:51.877 LINK llvm_vfio_fuzz 00:07:51.877 LINK pci_ut 00:07:51.877 LINK nvme_fuzz 00:07:51.877 LINK vhost_fuzz 00:07:51.877 LINK test_dma 00:07:51.877 LINK spdk_nvme_identify 00:07:51.877 LINK spdk_nvme 00:07:51.877 LINK spdk_bdev 00:07:51.877 LINK spdk_nvme_perf 00:07:51.877 LINK spdk_top 00:07:52.136 LINK mem_callbacks 00:07:52.136 LINK llvm_nvme_fuzz 00:07:52.136 CC app/vhost/vhost.o 00:07:52.136 CC examples/sock/hello_world/hello_sock.o 00:07:52.136 CC examples/idxd/perf/perf.o 00:07:52.136 CC examples/vmd/led/led.o 00:07:52.136 CC examples/vmd/lsvmd/lsvmd.o 00:07:52.394 CC examples/thread/thread/thread_ex.o 00:07:52.394 LINK led 00:07:52.394 LINK lsvmd 00:07:52.394 LINK vhost 00:07:52.394 LINK hello_sock 00:07:52.394 LINK spdk_lock 00:07:52.394 LINK idxd_perf 00:07:52.394 LINK memory_ut 00:07:52.653 LINK thread 00:07:52.912 LINK iscsi_fuzz 00:07:52.912 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:53.172 CC examples/nvme/abort/abort.o 00:07:53.172 CC test/event/reactor/reactor.o 00:07:53.172 CC examples/nvme/hotplug/hotplug.o 00:07:53.172 CC test/event/app_repeat/app_repeat.o 00:07:53.172 CC examples/nvme/hello_world/hello_world.o 00:07:53.172 CC test/event/reactor_perf/reactor_perf.o 00:07:53.172 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:53.172 CC examples/nvme/reconnect/reconnect.o 00:07:53.172 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:53.172 CC examples/nvme/arbitration/arbitration.o 00:07:53.172 CC test/event/event_perf/event_perf.o 00:07:53.172 CC test/event/scheduler/scheduler.o 00:07:53.172 LINK reactor 00:07:53.172 LINK event_perf 00:07:53.172 LINK pmr_persistence 00:07:53.172 LINK reactor_perf 00:07:53.172 LINK app_repeat 00:07:53.172 LINK cmb_copy 00:07:53.172 LINK 
hello_world 00:07:53.172 LINK hotplug 00:07:53.172 LINK reconnect 00:07:53.431 LINK nvme_manage 00:07:53.431 LINK scheduler 00:07:53.431 LINK abort 00:07:53.431 LINK arbitration 00:07:53.691 CC test/nvme/reserve/reserve.o 00:07:53.691 CC test/nvme/aer/aer.o 00:07:53.691 CC test/nvme/startup/startup.o 00:07:53.691 CC test/nvme/sgl/sgl.o 00:07:53.691 CC test/nvme/connect_stress/connect_stress.o 00:07:53.691 CC test/nvme/overhead/overhead.o 00:07:53.691 CC test/nvme/boot_partition/boot_partition.o 00:07:53.691 CC test/nvme/reset/reset.o 00:07:53.691 CC test/nvme/err_injection/err_injection.o 00:07:53.691 CC test/nvme/cuse/cuse.o 00:07:53.691 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:53.691 CC test/nvme/fused_ordering/fused_ordering.o 00:07:53.691 CC test/nvme/fdp/fdp.o 00:07:53.691 CC test/nvme/simple_copy/simple_copy.o 00:07:53.691 CC test/nvme/compliance/nvme_compliance.o 00:07:53.691 CC test/nvme/e2edp/nvme_dp.o 00:07:53.691 CC test/accel/dif/dif.o 00:07:53.691 CC test/blobfs/mkfs/mkfs.o 00:07:53.691 CC test/lvol/esnap/esnap.o 00:07:53.691 LINK startup 00:07:53.691 LINK reserve 00:07:53.691 LINK connect_stress 00:07:53.691 LINK boot_partition 00:07:53.691 LINK sgl 00:07:53.951 LINK err_injection 00:07:53.951 LINK overhead 00:07:53.951 LINK doorbell_aers 00:07:53.951 LINK reset 00:07:53.951 LINK fused_ordering 00:07:53.951 LINK aer 00:07:53.951 LINK simple_copy 00:07:53.951 LINK nvme_dp 00:07:53.951 LINK fdp 00:07:53.951 LINK mkfs 00:07:53.951 LINK nvme_compliance 00:07:54.211 CC examples/accel/perf/accel_perf.o 00:07:54.211 CC examples/blob/hello_world/hello_blob.o 00:07:54.211 CC examples/blob/cli/blobcli.o 00:07:54.211 LINK dif 00:07:54.211 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:54.470 LINK hello_blob 00:07:54.470 LINK hello_fsdev 00:07:54.470 LINK accel_perf 00:07:54.731 LINK blobcli 00:07:54.731 LINK cuse 00:07:55.299 CC examples/bdev/bdevperf/bdevperf.o 00:07:55.299 CC examples/bdev/hello_world/hello_bdev.o 00:07:55.559 LINK hello_bdev 00:07:55.818 CC test/bdev/bdevio/bdevio.o 00:07:55.818 LINK bdevperf 00:07:56.077 LINK bdevio 00:07:57.016 LINK esnap 00:07:57.585 CC examples/nvmf/nvmf/nvmf.o 00:07:57.585 LINK nvmf 00:07:58.975 00:07:58.975 real 0m43.800s 00:07:58.975 user 7m3.144s 00:07:58.975 sys 2m9.634s 00:07:58.975 18:04:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:58.975 18:04:36 make -- common/autotest_common.sh@10 -- $ set +x 00:07:58.975 ************************************ 00:07:58.975 END TEST make 00:07:58.975 ************************************ 00:07:59.251 18:04:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:59.251 18:04:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:59.251 18:04:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:59.251 18:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.251 18:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:59.251 18:04:36 -- pm/common@44 -- $ pid=3162798 00:07:59.251 18:04:36 -- pm/common@50 -- $ kill -TERM 3162798 00:07:59.251 18:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.251 18:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:59.251 18:04:36 -- pm/common@44 -- $ pid=3162800 00:07:59.251 18:04:36 -- pm/common@50 -- $ kill -TERM 3162800 00:07:59.251 18:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.251 
18:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:59.251 18:04:36 -- pm/common@44 -- $ pid=3162802 00:07:59.251 18:04:36 -- pm/common@50 -- $ kill -TERM 3162802 00:07:59.251 18:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.251 18:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:59.251 18:04:36 -- pm/common@44 -- $ pid=3162836 00:07:59.251 18:04:36 -- pm/common@50 -- $ sudo -E kill -TERM 3162836 00:07:59.251 18:04:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:59.251 18:04:36 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:07:59.251 18:04:36 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.251 18:04:36 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.251 18:04:36 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.251 18:04:36 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.251 18:04:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.251 18:04:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.251 18:04:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.251 18:04:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.251 18:04:36 -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.251 18:04:36 -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.251 18:04:36 -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.251 18:04:36 -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.251 18:04:36 -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.251 18:04:36 -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.251 18:04:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.251 18:04:36 -- scripts/common.sh@344 -- # case "$op" in 00:07:59.251 18:04:36 -- scripts/common.sh@345 -- # : 1 00:07:59.251 18:04:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.251 18:04:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.251 18:04:36 -- scripts/common.sh@365 -- # decimal 1 00:07:59.251 18:04:36 -- scripts/common.sh@353 -- # local d=1 00:07:59.251 18:04:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.251 18:04:36 -- scripts/common.sh@355 -- # echo 1 00:07:59.251 18:04:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.251 18:04:36 -- scripts/common.sh@366 -- # decimal 2 00:07:59.251 18:04:36 -- scripts/common.sh@353 -- # local d=2 00:07:59.251 18:04:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.251 18:04:36 -- scripts/common.sh@355 -- # echo 2 00:07:59.251 18:04:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.251 18:04:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.251 18:04:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.251 18:04:36 -- scripts/common.sh@368 -- # return 0 00:07:59.251 18:04:36 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.251 18:04:36 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.251 --rc genhtml_branch_coverage=1 00:07:59.251 --rc genhtml_function_coverage=1 00:07:59.252 --rc genhtml_legend=1 00:07:59.252 --rc geninfo_all_blocks=1 00:07:59.252 --rc geninfo_unexecuted_blocks=1 00:07:59.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:59.252 ' 00:07:59.252 18:04:36 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.252 --rc genhtml_branch_coverage=1 00:07:59.252 --rc genhtml_function_coverage=1 00:07:59.252 --rc genhtml_legend=1 00:07:59.252 --rc geninfo_all_blocks=1 00:07:59.252 --rc geninfo_unexecuted_blocks=1 00:07:59.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:59.252 ' 00:07:59.252 18:04:36 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.252 --rc genhtml_branch_coverage=1 00:07:59.252 --rc genhtml_function_coverage=1 00:07:59.252 --rc genhtml_legend=1 00:07:59.252 --rc geninfo_all_blocks=1 00:07:59.252 --rc geninfo_unexecuted_blocks=1 00:07:59.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:59.252 ' 00:07:59.252 18:04:36 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.252 --rc genhtml_branch_coverage=1 00:07:59.252 --rc genhtml_function_coverage=1 00:07:59.252 --rc genhtml_legend=1 00:07:59.252 --rc geninfo_all_blocks=1 00:07:59.252 --rc geninfo_unexecuted_blocks=1 00:07:59.252 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:59.252 ' 00:07:59.252 18:04:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.252 18:04:36 -- nvmf/common.sh@7 -- # uname -s 00:07:59.252 18:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.252 18:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.252 18:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.252 18:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.252 18:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.252 18:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.252 18:04:36 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.252 18:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.252 18:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.252 18:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.252 18:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0051316f-76a7-e811-906e-00163566263e 00:07:59.252 18:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=0051316f-76a7-e811-906e-00163566263e 00:07:59.252 18:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.252 18:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.252 18:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:59.252 18:04:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.252 18:04:36 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:59.252 18:04:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.252 18:04:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.252 18:04:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.252 18:04:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.252 18:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.252 18:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.252 18:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.252 18:04:36 -- paths/export.sh@5 -- # export PATH 00:07:59.252 18:04:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.252 18:04:36 -- nvmf/common.sh@51 -- # : 0 00:07:59.252 18:04:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.252 18:04:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.252 18:04:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.252 18:04:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.252 18:04:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.252 18:04:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.252 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.252 18:04:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.252 18:04:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.252 18:04:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.252 18:04:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:59.252 18:04:36 -- spdk/autotest.sh@32 -- # uname -s 00:07:59.252 
18:04:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:59.252 18:04:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:59.252 18:04:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:07:59.252 18:04:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:59.252 18:04:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:07:59.252 18:04:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:59.252 18:04:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:59.252 18:04:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:59.252 18:04:36 -- spdk/autotest.sh@48 -- # udevadm_pid=3226183 00:07:59.252 18:04:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:59.252 18:04:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:59.252 18:04:36 -- pm/common@17 -- # local monitor 00:07:59.252 18:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.252 18:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.252 18:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.252 18:04:36 -- pm/common@21 -- # date +%s 00:07:59.252 18:04:36 -- pm/common@21 -- # date +%s 00:07:59.252 18:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.252 18:04:36 -- pm/common@21 -- # date +%s 00:07:59.252 18:04:36 -- pm/common@25 -- # sleep 1 00:07:59.252 18:04:36 -- pm/common@21 -- # date +%s 00:07:59.252 18:04:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640676 00:07:59.252 18:04:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640676 00:07:59.252 18:04:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640676 00:07:59.252 18:04:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640676 00:07:59.530 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640676_collect-cpu-temp.pm.log 00:07:59.530 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640676_collect-vmstat.pm.log 00:07:59.530 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640676_collect-cpu-load.pm.log 00:07:59.530 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640676_collect-bmc-pm.bmc.pm.log 00:08:00.537 18:04:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:00.537 18:04:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:00.537 18:04:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.537 18:04:37 -- common/autotest_common.sh@10 -- # set +x 
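
Note: the autotest.sh trace above saves the existing systemd-coredump core_pattern, creates the coredumps output directory, and echoes a '|.../scripts/core-collector.sh %P %s %t' pipe handler; the redirection into /proc/sys/kernel/core_pattern is not visible in xtrace output, so that target is an assumption here. A minimal sketch of the same pipe-handler mechanism follows; the collector name, its body, and the output path are hypothetical and are not SPDK's actual core-collector.sh.

    out_dir=/tmp/coredumps                               # illustrative location only
    mkdir -p "$out_dir"

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)  # keep the original so it can be restored

    # A leading '|' makes the kernel pipe each dump into the handler's stdin;
    # %P = PID, %s = signal number, %t = dump time (the same specifiers as above).
    echo "|/usr/local/bin/my-core-collector.sh %P %s %t" | sudo tee /proc/sys/kernel/core_pattern

    # my-core-collector.sh (hypothetical minimal handler, invoked by the kernel):
    #   #!/usr/bin/env bash
    #   pid=$1 sig=$2 stamp=$3
    #   cat > "/tmp/coredumps/core.${pid}.${sig}.${stamp}"

    # Restore the saved pattern once the run is over:
    echo "$old_core_pattern" | sudo tee /proc/sys/kernel/core_pattern
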
00:08:00.537 18:04:37 -- spdk/autotest.sh@59 -- # create_test_list 00:08:00.537 18:04:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:00.537 18:04:37 -- common/autotest_common.sh@10 -- # set +x 00:08:00.537 18:04:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:08:00.537 18:04:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:00.537 18:04:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:00.537 18:04:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:00.537 18:04:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:00.537 18:04:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:00.538 18:04:37 -- common/autotest_common.sh@1457 -- # uname 00:08:00.538 18:04:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:00.538 18:04:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:00.538 18:04:37 -- common/autotest_common.sh@1477 -- # uname 00:08:00.538 18:04:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:00.538 18:04:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:00.538 18:04:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:08:00.538 lcov: LCOV version 1.15 00:08:00.538 18:04:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:08:08.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:08.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:08:17.076 18:04:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:17.076 18:04:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.076 18:04:53 -- common/autotest_common.sh@10 -- # set +x 00:08:17.076 18:04:53 -- spdk/autotest.sh@78 -- # rm -f 00:08:17.076 18:04:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:18.013 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:08:18.013 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:08:18.013 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:08:18.013 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:08:18.013 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:08:18.013 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:08:18.013 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:08:18.013 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:00:04.1 (8086 2021): Already using 
the ioatdma driver 00:08:18.273 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:d9:00.0 (8086 0a54): Already using the nvme driver 00:08:18.273 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:08:18.273 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:08:18.533 18:04:55 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:18.533 18:04:55 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:18.533 18:04:55 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:18.533 18:04:55 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:18.533 18:04:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:18.533 18:04:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:18.533 18:04:55 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:18.533 18:04:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:18.533 18:04:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:18.533 18:04:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:18.533 18:04:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:18.533 18:04:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:18.533 18:04:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:18.533 18:04:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:18.533 18:04:55 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:18.533 No valid GPT data, bailing 00:08:18.533 18:04:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:18.533 18:04:55 -- scripts/common.sh@394 -- # pt= 00:08:18.533 18:04:55 -- scripts/common.sh@395 -- # return 1 00:08:18.533 18:04:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:18.533 1+0 records in 00:08:18.533 1+0 records out 00:08:18.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00198341 s, 529 MB/s 00:08:18.533 18:04:55 -- spdk/autotest.sh@105 -- # sync 00:08:18.533 18:04:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:18.533 18:04:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:18.533 18:04:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:23.809 18:05:01 -- spdk/autotest.sh@111 -- # uname -s 00:08:23.809 18:05:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:23.809 18:05:01 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:08:23.809 18:05:01 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:08:23.809 18:05:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.809 18:05:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.809 18:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:23.809 ************************************ 00:08:23.809 START TEST setup.sh 00:08:23.809 ************************************ 00:08:23.809 18:05:01 setup.sh -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:08:23.809 * Looking for test storage... 00:08:23.809 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:08:23.809 18:05:01 setup.sh -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:23.809 18:05:01 setup.sh -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.809 18:05:01 setup.sh -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.068 18:05:01 setup.sh -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:08:24.068 18:05:01 setup.sh -- scripts/common.sh@345 -- # : 1 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@353 -- # local d=1 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@355 -- # echo 1 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@353 -- # local d=2 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@355 -- # echo 2 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.069 18:05:01 setup.sh -- scripts/common.sh@368 -- # return 0 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.069 --rc genhtml_branch_coverage=1 00:08:24.069 --rc genhtml_function_coverage=1 00:08:24.069 --rc genhtml_legend=1 00:08:24.069 --rc geninfo_all_blocks=1 00:08:24.069 --rc geninfo_unexecuted_blocks=1 00:08:24.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.069 ' 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.069 --rc genhtml_branch_coverage=1 00:08:24.069 --rc genhtml_function_coverage=1 
00:08:24.069 --rc genhtml_legend=1 00:08:24.069 --rc geninfo_all_blocks=1 00:08:24.069 --rc geninfo_unexecuted_blocks=1 00:08:24.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.069 ' 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.069 --rc genhtml_branch_coverage=1 00:08:24.069 --rc genhtml_function_coverage=1 00:08:24.069 --rc genhtml_legend=1 00:08:24.069 --rc geninfo_all_blocks=1 00:08:24.069 --rc geninfo_unexecuted_blocks=1 00:08:24.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.069 ' 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.069 --rc genhtml_branch_coverage=1 00:08:24.069 --rc genhtml_function_coverage=1 00:08:24.069 --rc genhtml_legend=1 00:08:24.069 --rc geninfo_all_blocks=1 00:08:24.069 --rc geninfo_unexecuted_blocks=1 00:08:24.069 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.069 ' 00:08:24.069 18:05:01 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:08:24.069 18:05:01 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:24.069 18:05:01 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.069 18:05:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:24.069 ************************************ 00:08:24.069 START TEST acl 00:08:24.069 ************************************ 00:08:24.069 18:05:01 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:08:24.069 * Looking for test storage... 
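
Note: the scripts/common.sh lines traced around this point implement the dotted-version check that decides whether the installed lcov is new enough (lt 1.15 2 expands to cmp_versions 1.15 '<' 2). Below is a simplified re-implementation of that comparison for the '<' case only; the function names are illustrative, and the real scripts/common.sh also handles the other comparison operators.

    decimal() {                         # non-numeric version components count as 0
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
    }

    version_lt() {                      # return 0 (true) when $1 < $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && return 1
            ((ver1[v] < ver2[v])) && return 0
        done
        return 1                        # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' check in the trace
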
00:08:24.069 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:08:24.069 18:05:01 setup.sh.acl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.069 18:05:01 setup.sh.acl -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.069 18:05:01 setup.sh.acl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.069 18:05:01 setup.sh.acl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:08:24.069 18:05:01 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:08:24.328 18:05:01 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.328 18:05:01 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:08:24.328 18:05:01 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.328 18:05:01 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.328 18:05:01 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.328 18:05:01 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.328 --rc genhtml_branch_coverage=1 00:08:24.328 --rc genhtml_function_coverage=1 00:08:24.328 --rc genhtml_legend=1 00:08:24.328 --rc geninfo_all_blocks=1 00:08:24.328 --rc geninfo_unexecuted_blocks=1 00:08:24.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.328 ' 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.328 --rc genhtml_branch_coverage=1 00:08:24.328 --rc 
genhtml_function_coverage=1 00:08:24.328 --rc genhtml_legend=1 00:08:24.328 --rc geninfo_all_blocks=1 00:08:24.328 --rc geninfo_unexecuted_blocks=1 00:08:24.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.328 ' 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.328 --rc genhtml_branch_coverage=1 00:08:24.328 --rc genhtml_function_coverage=1 00:08:24.328 --rc genhtml_legend=1 00:08:24.328 --rc geninfo_all_blocks=1 00:08:24.328 --rc geninfo_unexecuted_blocks=1 00:08:24.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.328 ' 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.328 --rc genhtml_branch_coverage=1 00:08:24.328 --rc genhtml_function_coverage=1 00:08:24.328 --rc genhtml_legend=1 00:08:24.328 --rc geninfo_all_blocks=1 00:08:24.328 --rc geninfo_unexecuted_blocks=1 00:08:24.328 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.328 ' 00:08:24.328 18:05:01 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:24.328 18:05:01 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:24.328 18:05:01 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:08:24.328 18:05:01 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:08:24.328 18:05:01 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:08:24.328 18:05:01 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:08:24.328 18:05:01 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:08:24.328 18:05:01 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:24.328 18:05:01 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:26.866 18:05:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:08:26.866 18:05:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:08:26.866 18:05:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:08:26.866 18:05:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:08:26.866 18:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:26.866 18:05:04 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:08:28.772 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ (8086 == *:*:*.* ]] 00:08:28.772 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:28.772 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:28.772 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ (8086 == *:*:*.* ]] 00:08:28.772 
18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:28.772 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:28.772 Hugepages 00:08:28.772 node hugesize free / total 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 00:08:29.031 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5d:05.5 == *:*:*.* ]] 00:08:29.031 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ vfio-pci == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ae:05.5 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ vfio-pci == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ 
_ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d9:00.0 == *:*:*.* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\9\:\0\0\.\0* ]] 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:08:29.032 18:05:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:08:29.032 18:05:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.032 18:05:06 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.032 18:05:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:29.032 ************************************ 00:08:29.032 START TEST denied 00:08:29.032 ************************************ 00:08:29.032 18:05:06 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:08:29.032 18:05:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d9:00.0' 00:08:29.032 18:05:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:08:29.032 18:05:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d9:00.0' 00:08:29.032 18:05:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:08:29.032 18:05:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:08:32.320 0000:d9:00.0 (8086 0a54): Skipping denied controller at 0000:d9:00.0 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d9:00.0 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d9:00.0 ]] 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d9:00.0/driver 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:32.320 18:05:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:36.512 00:08:36.512 real 0m6.812s 00:08:36.512 user 0m2.241s 00:08:36.512 sys 0m3.773s 00:08:36.512 18:05:13 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.512 18:05:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:08:36.512 ************************************ 00:08:36.512 END TEST denied 00:08:36.512 ************************************ 00:08:36.512 18:05:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:36.512 18:05:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.512 18:05:13 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.512 18:05:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:36.512 ************************************ 00:08:36.512 
START TEST allowed 00:08:36.512 ************************************ 00:08:36.512 18:05:13 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:08:36.512 18:05:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d9:00.0 00:08:36.512 18:05:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:08:36.512 18:05:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d9:00.0 .*: nvme -> .*' 00:08:36.512 18:05:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:08:36.512 18:05:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:08:43.080 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:08:43.080 18:05:19 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:08:43.080 18:05:19 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:08:43.080 18:05:19 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:08:43.080 18:05:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:43.080 18:05:19 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:44.985 00:08:44.985 real 0m9.033s 00:08:44.985 user 0m2.088s 00:08:44.985 sys 0m3.739s 00:08:44.985 18:05:22 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.985 18:05:22 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:08:44.985 ************************************ 00:08:44.985 END TEST allowed 00:08:44.985 ************************************ 00:08:44.985 00:08:44.985 real 0m21.077s 00:08:44.985 user 0m6.229s 00:08:44.985 sys 0m10.709s 00:08:44.985 18:05:22 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.985 18:05:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:44.985 ************************************ 00:08:44.985 END TEST acl 00:08:44.985 ************************************ 00:08:45.245 18:05:22 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:08:45.245 18:05:22 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.245 18:05:22 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.245 18:05:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:45.245 ************************************ 00:08:45.245 START TEST hugepages 00:08:45.245 ************************************ 00:08:45.245 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:08:45.245 * Looking for test storage... 
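
Note: the denied and allowed tests above drive scripts/setup.sh purely through the PCI_BLOCKED and PCI_ALLOWED environment variables and then grep its output, which is why the log shows "Skipping denied controller at 0000:d9:00.0" first and "nvme -> vfio-pci" afterwards. A condensed sketch of those two checks follows; SPDK_DIR is a placeholder for the workspace checkout, and the commands are assumed to run as root, as the job does.

    SPDK_DIR=/path/to/spdk          # illustrative; the log uses the jenkins workspace checkout
    bdf=0000:d9:00.0                # the NVMe controller exercised in this run

    # Deny the controller: setup.sh should skip it and say so.
    PCI_BLOCKED="$bdf" "$SPDK_DIR/scripts/setup.sh" config \
        | grep "Skipping denied controller at $bdf"

    # Allow only this controller: it should be rebound from the kernel nvme driver
    # to a userspace driver (vfio-pci in this log).
    PCI_ALLOWED="$bdf" "$SPDK_DIR/scripts/setup.sh" config \
        | grep -E "$bdf .*: nvme -> .*"

    "$SPDK_DIR/scripts/setup.sh" reset   # hand the device back to the kernel nvme driver
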
00:08:45.245 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:08:45.245 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.245 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.245 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.245 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.245 18:05:22 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.246 18:05:22 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:08:45.246 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.246 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:45.246 ' 00:08:45.246 18:05:22 
setup.sh.hugepages -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:45.246 ' 00:08:45.246 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:45.246 ' 00:08:45.246 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:45.246 ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 166107616 kB' 'MemAvailable: 169788236 kB' 'Buffers: 6836 kB' 'Cached: 16683476 kB' 'SwapCached: 0 kB' 'Active: 13315700 kB' 'Inactive: 4055652 kB' 'Active(anon): 12832080 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 684356 kB' 'Mapped: 193880 
kB' 'Shmem: 12151040 kB' 'KReclaimable: 567124 kB' 'Slab: 1330312 kB' 'SReclaimable: 567124 kB' 'SUnreclaim: 763188 kB' 'KernelStack: 21952 kB' 'PageTables: 9332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101974028 kB' 'Committed_AS: 14276208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321644 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.246 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.247 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 
18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.507 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 
0 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:08:45.508 18:05:22 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:08:45.508 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.508 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.508 18:05:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:45.508 ************************************ 00:08:45.508 START TEST single_node_setup 00:08:45.508 ************************************ 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # 
nodes_test[_no_nodes]=1024 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:08:45.508 18:05:22 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:48.057 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:08:48.057 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:08:48.057 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:48.057 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:48.058 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:48.058 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:51.360 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168380596 kB' 'MemAvailable: 172060872 kB' 'Buffers: 6836 kB' 'Cached: 16683640 kB' 'SwapCached: 0 kB' 'Active: 13319096 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835476 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687908 kB' 'Mapped: 193664 kB' 'Shmem: 12151204 kB' 'KReclaimable: 566780 kB' 'Slab: 1327572 kB' 'SReclaimable: 566780 kB' 'SUnreclaim: 760792 kB' 'KernelStack: 21984 kB' 'PageTables: 9832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14280880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321692 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.625 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 
18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 
18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
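Earlier in this trace, clear_hp wrote 0 to every per-node hugepage counter, and the test then re-provisioned the pool with NRHUGE=1024 HUGENODE=0 before invoking scripts/setup.sh. A rough sketch of that clear-then-allocate flow, assuming plain sysfs writes run as root; the real setup.sh does considerably more, including the ioatdma/nvme to vfio-pci rebinding shown above:

# Clear any existing hugepage reservation on every NUMA node (cf. clear_hp in the trace).
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
done
export CLEAR_HUGE=yes

# Re-provision 1024 x 2048 kB pages on node 0 only, as NRHUGE=1024 HUGENODE=0 requests.
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages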
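The remainder of this trace is verify_nr_hugepages pulling individual counters out of the same meminfo snapshot: AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down. A condensed sketch of that bookkeeping, reusing the get_meminfo sketch above; the wrapper name and the final comparison are illustrative rather than the script's exact logic:

check_hugepage_counters() {
    local anon surp resv total free
    anon=$(get_meminfo AnonHugePages)    # anonymous THP usage; 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted-in pages
    total=$(get_meminfo HugePages_Total)
    free=$(get_meminfo HugePages_Free)
    echo "total=$total free=$free rsvd=$resv surp=$surp anon=$anon"
    # The single-node run expects the whole 1024-page pool to be present with no surplus,
    # matching the "HugePages_Total: 1024" / "HugePages_Surp: 0" values in the dumps below.
    (( total == 1024 && surp == 0 ))
}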
00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.626 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168379600 kB' 'MemAvailable: 172059876 kB' 'Buffers: 6836 kB' 'Cached: 16683640 kB' 'SwapCached: 0 kB' 'Active: 13318964 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835344 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687324 kB' 'Mapped: 193644 kB' 'Shmem: 12151204 kB' 'KReclaimable: 566780 kB' 'Slab: 1327416 kB' 'SReclaimable: 566780 kB' 'SUnreclaim: 760636 kB' 'KernelStack: 22160 kB' 'PageTables: 10116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 
14282400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321708 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.627 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.628 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168379252 kB' 'MemAvailable: 172059528 kB' 'Buffers: 6836 kB' 'Cached: 16683660 kB' 'SwapCached: 0 kB' 'Active: 13319444 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835824 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687824 kB' 'Mapped: 193536 kB' 'Shmem: 12151224 kB' 'KReclaimable: 566780 kB' 'Slab: 1327392 kB' 'SReclaimable: 566780 kB' 'SUnreclaim: 760612 kB' 'KernelStack: 22112 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14282424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321756 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.629 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.630 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.631 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:08:51.632 nr_hugepages=1024 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:08:51.632 resv_hugepages=0 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:08:51.632 surplus_hugepages=0 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:08:51.632 anon_hugepages=0 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 
-- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168379456 kB' 'MemAvailable: 172059732 kB' 'Buffers: 6836 kB' 'Cached: 16683660 kB' 'SwapCached: 0 kB' 'Active: 13319680 kB' 'Inactive: 4055652 kB' 'Active(anon): 12836060 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688060 kB' 'Mapped: 193536 kB' 'Shmem: 12151224 kB' 'KReclaimable: 566780 kB' 'Slab: 1327392 kB' 'SReclaimable: 566780 kB' 'SUnreclaim: 760612 kB' 'KernelStack: 22128 kB' 'PageTables: 9844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14282448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321756 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 
18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.632 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.633 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 
-- # mapfile -t mem 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97606856 kB' 'MemFree: 83344420 kB' 'MemUsed: 14262436 kB' 'SwapCached: 0 kB' 'Active: 6495668 kB' 'Inactive: 3947212 kB' 'Active(anon): 6174060 kB' 'Inactive(anon): 0 kB' 'Active(file): 321608 kB' 'Inactive(file): 3947212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9920600 kB' 'Mapped: 119564 kB' 'AnonPages: 525428 kB' 'Shmem: 5651780 kB' 'KernelStack: 13032 kB' 'PageTables: 6264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396060 kB' 'Slab: 804364 kB' 'SReclaimable: 396060 kB' 'SUnreclaim: 408304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.634 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.635 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:08:51.636 node0=1024 expecting 1024 00:08:51.636 
18:05:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:08:51.636 00:08:51.636 real 0m6.269s 00:08:51.636 user 0m1.163s 00:08:51.636 sys 0m1.863s 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.636 18:05:28 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 ************************************ 00:08:51.636 END TEST single_node_setup 00:08:51.636 ************************************ 00:08:51.636 18:05:29 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:08:51.636 18:05:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.636 18:05:29 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.636 18:05:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 ************************************ 00:08:51.636 START TEST even_2G_alloc 00:08:51.636 ************************************ 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 
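For readability, here is what the trace above is doing: the framework's meminfo lookup reads /proc/meminfo (or a node-local meminfo file when a node is given), strips any "Node N" prefix, then walks the key/value pairs until the requested key (here HugePages_Surp on node0) matches and echoes its value. The following is a minimal stand-alone sketch of that lookup, not the framework's own setup/common.sh; the function name meminfo_value and its interface are illustrative only.

#!/usr/bin/env bash
# Hedged sketch of the meminfo lookup traced above (simplified stand-in,
# not the real setup/common.sh helper).
shopt -s extglob

meminfo_value() {
    local key=$1 node=${2:-}          # key, e.g. HugePages_Surp; node is optional
    local mem_f=/proc/meminfo line var val _rest

    # Per-node lookups read the node-local meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#Node +([0-9]) }               # drop the "Node N " prefix, if any
        IFS=': ' read -r var val _rest <<<"$line" # split "Key: value [kB]"
        if [[ $var == "$key" ]]; then             # first matching key wins
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"

    echo 0                                         # key not present: report 0
}

# Example (hypothetical call): free 2 MiB hugepages on NUMA node 0
# meminfo_value HugePages_Free 0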
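The even_2G_alloc prologue above converts the requested 2G allocation into a hugepage count (2097152 kB / 2048 kB per page = 1024 pages) and spreads it evenly across the two NUMA nodes, which is what the nodes_test[...]=512 assignments record before setup.sh runs with NRHUGE=1024 below. A minimal sketch of that arithmetic, assuming the size is given in kB; the name split_hugepages_per_node is illustrative, not the framework's API.

#!/usr/bin/env bash
# Hedged sketch: distribute a requested allocation evenly over NUMA nodes.
split_hugepages_per_node() {
    local size_kb=$1 nodes=$2
    local hugepagesize_kb total per_node node

    # Default hugepage size as reported by the kernel (2048 kB on this rig).
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    total=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024 pages
    per_node=$(( total / nodes ))            # 1024 / 2 = 512 pages per node

    for (( node = 0; node < nodes; node++ )); do
        echo "node${node}=${per_node}"
    done
}

# Example from this log: a 2 GiB request on a 2-node system
# split_hugepages_per_node 2097152 2   ->  node0=512  node1=512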
00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:51.636 18:05:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:54.186 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:08:54.186 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:08:54.186 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:54.186 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:54.186 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:54.186 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.186 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168558312 kB' 'MemAvailable: 172238604 kB' 'Buffers: 6836 kB' 'Cached: 16683784 kB' 'SwapCached: 0 kB' 'Active: 13319044 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835424 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687372 kB' 'Mapped: 193724 kB' 'Shmem: 12151348 kB' 'KReclaimable: 566796 kB' 'Slab: 1327732 kB' 'SReclaimable: 566796 kB' 'SUnreclaim: 760936 kB' 'KernelStack: 21984 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14280368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321788 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.187 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:54.188 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168558856 kB' 'MemAvailable: 172239148 kB' 'Buffers: 6836 kB' 'Cached: 16683788 kB' 'SwapCached: 0 kB' 'Active: 13318748 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835128 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687132 kB' 'Mapped: 193488 kB' 'Shmem: 12151352 kB' 'KReclaimable: 566796 kB' 'Slab: 1327748 kB' 'SReclaimable: 566796 kB' 'SUnreclaim: 760952 kB' 'KernelStack: 21984 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14280384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321756 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.189 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.190 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.191 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168559056 kB' 'MemAvailable: 172239348 kB' 'Buffers: 6836 kB' 'Cached: 16683808 kB' 'SwapCached: 0 kB' 'Active: 13318784 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835164 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687124 kB' 'Mapped: 193488 kB' 'Shmem: 12151372 kB' 'KReclaimable: 566796 kB' 'Slab: 1327748 kB' 'SReclaimable: 566796 kB' 'SUnreclaim: 760952 kB' 'KernelStack: 21984 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14280404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321772 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.192 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.193 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.194 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:08:54.195 nr_hugepages=1024 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:08:54.195 resv_hugepages=0 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:08:54.195 surplus_hugepages=0 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:08:54.195 anon_hugepages=0 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:54.195 
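The two passes traced above scan /proc/meminfo key by key (the long run of "[[ <key> == ... ]] / continue" entries) until HugePages_Surp and HugePages_Rsvd are reached, yielding surp=0 and resv=0 ahead of the nr_hugepages=1024 summary. A minimal sketch of that scan, assuming only a standard /proc/meminfo layout (meminfo_value is an illustrative name, not the exact get_meminfo helper from setup/common.sh):

    # Scan meminfo line by line until the requested field is found, then print
    # its value -- the same read/compare/continue loop visible in the trace.
    meminfo_value() {
        local want=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue
            echo "$val"          # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        done < "$file"
        return 1
    }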
18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168558048 kB' 'MemAvailable: 172238340 kB' 'Buffers: 6836 kB' 'Cached: 16683824 kB' 'SwapCached: 0 kB' 'Active: 13319000 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835380 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687268 kB' 'Mapped: 193488 kB' 'Shmem: 12151388 kB' 'KReclaimable: 566796 kB' 'Slab: 1327748 kB' 'SReclaimable: 566796 kB' 'SUnreclaim: 760952 kB' 'KernelStack: 21984 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14280576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321772 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.195 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.196 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.197 
18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.197 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
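get_nodes above records 512 pages on each of the two NUMA nodes found under /sys/devices/system/node, consistent with the 1024 x 2MB pool being split evenly (1G per node), before node 0 is checked via its node-local meminfo. A rough sketch of how such an even split could be expressed, with illustrative variable names rather than the hugepages.sh internals:

    # Spread the requested pool evenly across the NUMA nodes found in sysfs.
    nr_hugepages=1024
    declare -A nodes_expected
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_expected[${node##*node}]=0
    done
    no_nodes=${#nodes_expected[@]}               # 2 on this machine
    per_node=$(( nr_hugepages / no_nodes ))      # 512 x 2MB pages, i.e. 1G per node
    for idx in "${!nodes_expected[@]}"; do
        nodes_expected[$idx]=$per_node
    done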
00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97606856 kB' 'MemFree: 84496048 kB' 'MemUsed: 13110808 kB' 'SwapCached: 0 kB' 'Active: 6494988 kB' 'Inactive: 3947212 kB' 'Active(anon): 6173380 kB' 'Inactive(anon): 0 kB' 'Active(file): 321608 kB' 'Inactive(file): 3947212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9920712 kB' 'Mapped: 119572 kB' 'AnonPages: 524684 kB' 'Shmem: 5651892 kB' 'KernelStack: 13128 kB' 'PageTables: 6476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396092 kB' 'Slab: 804848 kB' 'SReclaimable: 396092 kB' 'SUnreclaim: 408756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:54.198 18:05:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': '
[... setup/common.sh@31-32: the remaining node0 meminfo fields are read and skipped one record at a time until HugePages_Surp is reached ...]
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
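The records above are the xtrace of setup/common.sh's get_meminfo helper. As a readable summary, a minimal sketch of what that helper does (reconstructed from the trace lines; not the verbatim SPDK script, and error handling is omitted) could look like this:

#!/usr/bin/env bash
# Sketch of get_meminfo <key> [node]: print the value of <key> from
# /proc/meminfo, or from /sys/devices/system/node/node<N>/meminfo when
# a NUMA node is given (per-node files prefix each line with "Node <N> ").
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # value in kB, or a page count
            return 0
        fi
    done
    return 1
}

With node 1 selected, as in the get_meminfo HugePages_Surp 1 call above, such a helper reads /sys/devices/system/node/node1/meminfo and prints 0 on this host, which matches the echo 0 / return 0 pair the trace shows.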
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:08:54.200 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93758296 kB' 'MemFree: 84061628 kB' 'MemUsed: 9696668 kB' 'SwapCached: 0 kB' 'Active: 6823784 kB' 'Inactive: 108440 kB' 'Active(anon): 6661772 kB' 'Inactive(anon): 0 kB' 'Active(file): 162012 kB' 'Inactive(file): 108440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6769952 kB' 'Mapped: 73968 kB' 'AnonPages: 162392 kB' 'Shmem: 6499500 kB' 'KernelStack: 8824 kB' 'PageTables: 2776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 170704 kB' 'Slab: 522900 kB' 'SReclaimable: 170704 kB' 'SUnreclaim: 352196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: each node1 meminfo key from the list above is read and skipped one record at a time until HugePages_Surp matches ...]
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:08:54.202 node0=512 expecting 512
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:08:54.202 node1=512 expecting 512
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:08:54.202
00:08:54.202 real 0m2.485s
00:08:54.202 user 0m0.958s
00:08:54.202 sys 0m1.427s
00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc --
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.202 18:05:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:54.202 ************************************ 00:08:54.202 END TEST even_2G_alloc 00:08:54.202 ************************************ 00:08:54.202 18:05:31 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:08:54.202 18:05:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.202 18:05:31 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.202 18:05:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:54.202 ************************************ 00:08:54.202 START TEST odd_alloc 00:08:54.202 ************************************ 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:54.202 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:08:54.203 18:05:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:08:54.203 18:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:54.203 18:05:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:56.744 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:08:56.744 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:08:56.744 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:56.744 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:56.744 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.744 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168624820 kB' 'MemAvailable: 172305072 kB' 'Buffers: 6836 kB' 'Cached: 16683940 kB' 'SwapCached: 0 kB' 'Active: 13318436 kB' 'Inactive: 4055652 kB' 'Active(anon): 12834816 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 686424 kB' 'Mapped: 192760 kB' 'Shmem: 12151504 kB' 'KReclaimable: 566756 kB' 'Slab: 1327248 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760492 kB' 'KernelStack: 21904 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103021580 kB' 'Committed_AS: 14272532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321868 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB'
[... setup/common.sh@31-32: each meminfo key from the list above is read and skipped one record at a time until AnonHugePages matches ...]
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
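Before the trace continues with the second get_meminfo call, note where odd_alloc's per-node targets come from: get_test_nr_hugepages_per_node (hugepages.sh@80-83, traced further up) splits the odd total of 1025 pages across the 2 nodes. A minimal sketch of that split, assuming the remainder simply accumulates on node 0 as the trace suggests (this is a reconstruction, not the verbatim hugepages.sh helper), is:

# Sketch: divide a hugepage total across NUMA nodes the way the odd_alloc
# trace shows (1025 over 2 nodes -> node1=512, node0=513); the last node
# filled (node 0) absorbs the odd remainder.
split_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test
    local node
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))   # even share, rounded down
        ((_nr_hugepages -= nodes_test[_no_nodes - 1]))             # pages left for remaining nodes
        ((_no_nodes--))
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]}"
    done
}
split_hugepages_per_node 1025 2   # prints node0=513 and node1=512, one per line

Those two counts are the per-node values odd_alloc is expected to verify against the per-node meminfo files in the records that follow.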
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:08:56.745 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168617880 kB' 'MemAvailable: 172298132 kB' 'Buffers: 6836 kB' 'Cached: 16683944 kB' 'SwapCached: 0 kB' 'Active: 13321468 kB' 'Inactive: 4055652 kB' 'Active(anon): 12837848 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689448 kB' 'Mapped: 193236 kB' 'Shmem: 12151508 kB' 'KReclaimable: 566756 kB' 'Slab: 1327240 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760484 kB' 'KernelStack: 21888 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103021580 kB' 'Committed_AS: 14275128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321788 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB'
[... setup/common.sh@31-32: each meminfo key from the list above is read and skipped one record at a time, continuing toward HugePages_Surp ...]
00:08:56.747 18:05:34
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168614244 kB' 'MemAvailable: 172294496 kB' 'Buffers: 6836 kB' 'Cached: 16683960 kB' 'SwapCached: 0 kB' 'Active: 13323876 kB' 'Inactive: 4055652 kB' 'Active(anon): 12840256 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691900 kB' 'Mapped: 193416 kB' 'Shmem: 12151524 kB' 'KReclaimable: 566756 kB' 'Slab: 1327324 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760568 kB' 'KernelStack: 21872 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103021580 kB' 'Committed_AS: 14277668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321792 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.747 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.748 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:08:56.749 nr_hugepages=1025 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:08:56.749 resv_hugepages=0 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:08:56.749 surplus_hugepages=0 00:08:56.749 
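
At this point both the surplus and reserved counts have come back as 0 (surp=0, resv=0) and the script echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0. The get_meminfo helper whose xtrace fills the log above can be summarised by the following minimal sketch, reconstructed from the trace for readability; the actual setup/common.sh may differ in details:

shopt -s extglob

# Return one field from /proc/meminfo, or from the per-NUMA-node meminfo
# when a node number is supplied (e.g. get_meminfo HugePages_Total 0).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

Called as get_meminfo HugePages_Rsvd it walks every meminfo key (the long run of "continue" lines above) until it hits the requested one and prints its value, here 0.
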
18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:08:56.749 anon_hugepages=0 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.749 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168616324 kB' 'MemAvailable: 172296576 kB' 'Buffers: 6836 kB' 'Cached: 16683984 kB' 'SwapCached: 0 kB' 'Active: 13323624 kB' 'Inactive: 4055652 kB' 'Active(anon): 12840004 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691600 kB' 'Mapped: 193480 kB' 'Shmem: 12151548 kB' 'KReclaimable: 566756 kB' 'Slab: 1327292 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760536 kB' 'KernelStack: 21888 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103021580 kB' 'Committed_AS: 14277688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321792 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.750 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:56.750 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.012 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.012 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97606856 kB' 'MemFree: 84512632 kB' 'MemUsed: 13094224 kB' 'SwapCached: 0 kB' 'Active: 6495360 kB' 'Inactive: 3947212 kB' 'Active(anon): 6173752 kB' 'Inactive(anon): 0 kB' 'Active(file): 321608 kB' 'Inactive(file): 3947212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9920880 kB' 'Mapped: 119012 kB' 'AnonPages: 524828 kB' 'Shmem: 5652060 kB' 'KernelStack: 13048 kB' 'PageTables: 6128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396052 kB' 'Slab: 804520 kB' 'SReclaimable: 396052 kB' 'SUnreclaim: 408468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.013 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.014 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93758296 kB' 'MemFree: 84112692 kB' 'MemUsed: 9645604 kB' 'SwapCached: 0 kB' 'Active: 6822360 kB' 'Inactive: 108440 kB' 'Active(anon): 6660348 kB' 'Inactive(anon): 0 kB' 'Active(file): 162012 kB' 'Inactive(file): 108440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6769960 kB' 'Mapped: 73616 kB' 'AnonPages: 160896 kB' 'Shmem: 6499508 kB' 'KernelStack: 8808 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 170704 kB' 'Slab: 522772 kB' 'SReclaimable: 170704 kB' 'SUnreclaim: 352068 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.015 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:08:57.016 node0=513 expecting 513 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:08:57.016 node1=512 expecting 512 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:08:57.016 00:08:57.016 real 0m2.675s 00:08:57.016 user 0m1.075s 00:08:57.016 sys 0m1.595s 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.016 18:05:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:57.016 ************************************ 00:08:57.016 END TEST odd_alloc 00:08:57.016 ************************************ 00:08:57.016 18:05:34 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:08:57.016 18:05:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.016 18:05:34 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.016 18:05:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:57.016 ************************************ 00:08:57.016 START TEST custom_alloc 00:08:57.016 ************************************ 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@48 -- # local size=1048576 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:57.016 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:57.017 18:05:34 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:57.017 18:05:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:58.921 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:08:58.921 0000:ae:05.5 (8086 201d): Skipping not allowed VMD 
controller at 0000:ae:05.5 00:08:58.921 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:08:58.921 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver 00:08:59.185 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:08:59.185 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 167603420 kB' 
'MemAvailable: 171283672 kB' 'Buffers: 6836 kB' 'Cached: 16684092 kB' 'SwapCached: 0 kB' 'Active: 13318440 kB' 'Inactive: 4055652 kB' 'Active(anon): 12834820 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 686348 kB' 'Mapped: 192664 kB' 'Shmem: 12151656 kB' 'KReclaimable: 566756 kB' 'Slab: 1327540 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760784 kB' 'KernelStack: 21856 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102498316 kB' 'Committed_AS: 14272052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321804 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.185 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.186 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@33 -- # return 0 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 167604348 kB' 'MemAvailable: 171284600 kB' 'Buffers: 6836 kB' 'Cached: 16684096 kB' 'SwapCached: 0 kB' 'Active: 13318140 kB' 'Inactive: 4055652 kB' 'Active(anon): 12834520 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 686008 kB' 'Mapped: 192640 kB' 'Shmem: 12151660 kB' 'KReclaimable: 566756 kB' 'Slab: 1327596 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760840 kB' 'KernelStack: 21824 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102498316 kB' 'Committed_AS: 14272068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321772 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.187 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.188 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 
18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:59.189 
18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.189 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 167604728 kB' 'MemAvailable: 171284980 kB' 'Buffers: 6836 kB' 'Cached: 16684096 kB' 'SwapCached: 0 kB' 'Active: 13317840 kB' 'Inactive: 4055652 kB' 'Active(anon): 12834220 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 685696 kB' 'Mapped: 192640 kB' 'Shmem: 12151660 kB' 'KReclaimable: 566756 kB' 'Slab: 1327564 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760808 kB' 'KernelStack: 21824 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102498316 kB' 'Committed_AS: 14272088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321788 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 
18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.190 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.191 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:08:59.192 nr_hugepages=1536 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:08:59.192 resv_hugepages=0 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:08:59.192 surplus_hugepages=0 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:08:59.192 anon_hugepages=0 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 167604224 kB' 'MemAvailable: 171284476 kB' 'Buffers: 6836 kB' 'Cached: 
16684096 kB' 'SwapCached: 0 kB' 'Active: 13318784 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835164 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 686708 kB' 'Mapped: 192640 kB' 'Shmem: 12151660 kB' 'KReclaimable: 566756 kB' 'Slab: 1327564 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760808 kB' 'KernelStack: 21888 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102498316 kB' 'Committed_AS: 14273528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321772 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.192 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.193 18:05:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.193 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.455 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
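The long run of "[[ <field> == HugePages_Total ]] ... continue" entries above and below is bash xtrace from the setup/common.sh meminfo helper: it walks a meminfo file one "field: value" pair at a time (IFS=': '; read -r var val _) and only stops when it reaches the requested field. A minimal standalone sketch of that lookup follows; the function name lookup_meminfo_field and its packaging are illustrative assumptions, not the script's own helper.

lookup_meminfo_field() {
    # Scan a meminfo-style file for one field and print its value.
    # Every field that fails the test corresponds to one "continue" entry in the trace.
    local want=$1 file=${2:-/proc/meminfo} var val rest
    while IFS=': ' read -r var val rest; do
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}
# Per the trace, the equivalent lookup of HugePages_Total returns 1536 on this host.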
00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 
18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # 
(( 1536 == nr_hugepages + surp + resv )) 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:59.456 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97606856 kB' 'MemFree: 84525168 kB' 'MemUsed: 13081688 kB' 'SwapCached: 0 kB' 'Active: 6494600 kB' 'Inactive: 3947212 kB' 'Active(anon): 6172992 kB' 'Inactive(anon): 0 kB' 'Active(file): 321608 kB' 'Inactive(file): 3947212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9920940 kB' 'Mapped: 119012 kB' 'AnonPages: 523972 kB' 'Shmem: 5652120 kB' 'KernelStack: 13000 kB' 'PageTables: 6024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396052 kB' 'Slab: 804744 kB' 'SReclaimable: 396052 kB' 'SUnreclaim: 408692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
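By this point the trace has verified that the 1536 global hugepages equal nr_hugepages + surplus + reserved, and get_nodes has recorded the per-node split (nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2). The entries that follow repeat the same field scan against each node's own meminfo file, /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the helper strips before matching, in order to pick up any HugePages_Surp pages. A hedged sketch of reading those per-node figures is below; the awk extraction and the standalone loop are an illustration, not the verbatim setup/hugepages.sh logic.

shopt -s extglob                 # the node+([0-9]) glob used in the trace requires extglob
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}      # "/sys/devices/system/node/node1" -> "1"
    # Per-node meminfo lines look like "Node 1 HugePages_Total:   1024".
    total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    surp=$(awk '/HugePages_Surp/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}: HugePages_Total=${total} HugePages_Surp=${surp}"
done
# With the values visible in the trace this reports 512 pages (surplus 0) on node0 and
# 1024 pages (surplus 0) on node1, matching the "node0=512 expecting 512" and
# "node1=1024 expecting 1024" checks printed further down.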
00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.457 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 
18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93758296 kB' 'MemFree: 83092200 kB' 'MemUsed: 10666096 kB' 'SwapCached: 0 kB' 'Active: 6824028 kB' 'Inactive: 108440 kB' 'Active(anon): 6662016 kB' 'Inactive(anon): 0 kB' 'Active(file): 162012 kB' 'Inactive(file): 108440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6770072 kB' 'Mapped: 73628 kB' 'AnonPages: 162496 kB' 'Shmem: 6499620 kB' 'KernelStack: 8792 kB' 'PageTables: 2644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 170704 kB' 'Slab: 522844 kB' 'SReclaimable: 170704 kB' 'SUnreclaim: 352140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.458 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:08:59.459 node0=512 expecting 512 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:08:59.459 node1=1024 expecting 1024 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:08:59.459 00:08:59.459 real 0m2.353s 00:08:59.459 user 0m0.829s 00:08:59.459 sys 0m1.359s 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.459 18:05:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:59.459 ************************************ 00:08:59.459 END TEST custom_alloc 00:08:59.459 ************************************ 00:08:59.459 18:05:36 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:59.459 18:05:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.459 18:05:36 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.459 18:05:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:59.459 ************************************ 00:08:59.459 START TEST no_shrink_alloc 00:08:59.459 ************************************ 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:08:59.459 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@56 -- # nr_hugepages=1024 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:59.460 18:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:01.368 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:09:01.368 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:09:01.368 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:09:01.368 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:09:01.368 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:09:01.368 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver 00:09:01.631 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:09:01.631 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:09:01.631 18:05:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168706508 kB' 'MemAvailable: 172386760 kB' 'Buffers: 6836 kB' 'Cached: 16684248 kB' 'SwapCached: 0 kB' 'Active: 13320536 kB' 'Inactive: 4055652 kB' 'Active(anon): 12836916 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688096 kB' 'Mapped: 192648 kB' 'Shmem: 12151812 kB' 'KReclaimable: 566756 kB' 'Slab: 1327340 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760584 kB' 'KernelStack: 22048 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14274864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321820 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.631 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.632 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.633 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168706936 kB' 'MemAvailable: 172387188 kB' 'Buffers: 6836 kB' 'Cached: 16684252 kB' 'SwapCached: 0 kB' 'Active: 13319480 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835860 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687276 kB' 'Mapped: 192632 kB' 'Shmem: 12151816 kB' 'KReclaimable: 566756 kB' 'Slab: 1327316 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760560 kB' 'KernelStack: 22080 kB' 'PageTables: 9624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14274888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321852 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.633 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.634 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168707088 kB' 'MemAvailable: 172387340 kB' 'Buffers: 6836 kB' 'Cached: 
16684268 kB' 'SwapCached: 0 kB' 'Active: 13319764 kB' 'Inactive: 4055652 kB' 'Active(anon): 12836144 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687496 kB' 'Mapped: 192684 kB' 'Shmem: 12151832 kB' 'KReclaimable: 566756 kB' 'Slab: 1327368 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760612 kB' 'KernelStack: 22128 kB' 'PageTables: 10052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14275044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321900 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.635 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.635 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.902 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:09:01.903 nr_hugepages=1024 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:09:01.903 resv_hugepages=0 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:09:01.903 surplus_hugepages=0 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:09:01.903 anon_hugepages=0 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168708980 kB' 'MemAvailable: 172389232 kB' 'Buffers: 6836 kB' 'Cached: 16684304 kB' 'SwapCached: 0 kB' 'Active: 13319588 kB' 'Inactive: 4055652 kB' 'Active(anon): 12835968 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687268 kB' 'Mapped: 192684 kB' 'Shmem: 12151868 kB' 'KReclaimable: 566756 kB' 'Slab: 1327368 kB' 'SReclaimable: 566756 kB' 'SUnreclaim: 760612 kB' 'KernelStack: 22032 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14275240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321820 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.903 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.904 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
97606856 kB' 'MemFree: 83507548 kB' 'MemUsed: 14099308 kB' 'SwapCached: 0 kB' 'Active: 6495312 kB' 'Inactive: 3947212 kB' 'Active(anon): 6173704 kB' 'Inactive(anon): 0 kB' 'Active(file): 321608 kB' 'Inactive(file): 3947212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9921060 kB' 'Mapped: 119044 kB' 'AnonPages: 524660 kB' 'Shmem: 5652240 kB' 'KernelStack: 13064 kB' 'PageTables: 6220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396052 kB' 'Slab: 804472 kB' 'SReclaimable: 396052 kB' 'SUnreclaim: 408420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 
18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.905 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.906 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.907 18:05:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:09:01.907 node0=1024 expecting 1024 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:09:01.907 18:05:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:04.455 0000:5d:05.5 (8086 201d): Skipping not 
allowed VMD controller at 0000:5d:05.5
00:09:04.455 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5
00:09:04.455 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:d9:00.0 (8086 0a54): Already using the vfio-pci driver
00:09:04.455 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:09:04.455 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:09:04.455 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
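The trace above and below repeatedly exercises one pattern: setup/common.sh's get_meminfo picks either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node <n> " prefix that per-node files carry, then scans "key: value" pairs until the requested counter (HugePages_Total, HugePages_Rsvd, HugePages_Surp, ...) matches, while setup/hugepages.sh's verify_nr_hugepages checks that the reported totals add up to the requested page count. The following is a minimal standalone sketch of that lookup and accounting check, assuming a bash 4+ shell; it is not the SPDK script itself, and the helper name meminfo_value, the variable names, and the hard-coded request of 1024 pages are illustrative assumptions taken from the values this run reports.

#!/usr/bin/env bash
# Sketch only: mirrors the lookup pattern traced in this log, not SPDK's setup/common.sh.
shopt -s extglob

meminfo_value() {    # usage: meminfo_value <Key> [<numa-node>]
    local key=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    # Per-node lookups read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        # Scan key by key; non-matching keys are skipped, the first match wins.
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    echo 0   # key not present at all
}

# Accounting check in the spirit of verify_nr_hugepages, assuming 1024 pages
# were requested (the value this run reports):
requested=1024
total=$(meminfo_value HugePages_Total)
resv=$(meminfo_value HugePages_Rsvd)
surp=$(meminfo_value HugePages_Surp)
node0_total=$(meminfo_value HugePages_Total 0)
(( total == requested + surp + resv )) && echo "system-wide count matches the request"
echo "node0=$node0_total expecting $requested"

The key-by-key scan is why the trace shows one "continue" per non-matching meminfo field before the final "echo 1024" (or "echo 0") and "return 0" for each lookup.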
00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168738436 kB' 'MemAvailable: 172418656 kB' 'Buffers: 6836 kB' 'Cached: 16684396 kB' 'SwapCached: 0 kB' 'Active: 13321768 kB' 'Inactive: 4055652 kB' 'Active(anon): 12838148 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689516 kB' 'Mapped: 193208 kB' 'Shmem: 12151960 kB' 'KReclaimable: 566724 kB' 'Slab: 1327236 kB' 'SReclaimable: 566724 kB' 'SUnreclaim: 760512 kB' 'KernelStack: 21920 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14276652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321724 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.455 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:04.456 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168731068 kB' 'MemAvailable: 172411288 kB' 'Buffers: 6836 kB' 'Cached: 16684400 kB' 'SwapCached: 0 kB' 'Active: 13325100 kB' 'Inactive: 4055652 kB' 'Active(anon): 12841480 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 693384 kB' 'Mapped: 193200 kB' 'Shmem: 12151964 kB' 'KReclaimable: 566724 kB' 'Slab: 1327276 kB' 'SReclaimable: 566724 kB' 'SUnreclaim: 760552 kB' 'KernelStack: 21872 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14281052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321728 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
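
The repeated xtrace entries above and below all exercise the same field-matching loop in setup/common.sh's get_meminfo helper: the meminfo text is read with IFS=': ', each field name is compared against the requested key (AnonHugePages, HugePages_Surp, and so on), non-matching fields fall through via continue, and the matching field's value is echoed back to the caller. The following is a minimal stand-alone sketch of that pattern; the function name and the exact control flow are illustrative, not the verbatim SPDK source.

    # Illustrative sketch of the meminfo scan traced above (not the verbatim
    # setup/common.sh code): split each line on ': ', stop at the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only; unit suffix lands in "_"
                return 0
            fi
        done < /proc/meminfo
        return 1               # key not present
    }

For example, get_meminfo_sketch HugePages_Surp would print 0 on this node, which is the same value the trace extracts before setting surp=0.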
00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 
18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.457 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:04.458 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168742888 kB' 'MemAvailable: 172423108 kB' 'Buffers: 6836 kB' 'Cached: 16684416 kB' 'SwapCached: 0 kB' 'Active: 13320328 kB' 'Inactive: 4055652 kB' 'Active(anon): 12836708 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688188 kB' 'Mapped: 193096 kB' 'Shmem: 12151980 kB' 'KReclaimable: 566724 kB' 'Slab: 1327248 kB' 'SReclaimable: 566724 kB' 'SUnreclaim: 760524 kB' 'KernelStack: 21936 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14275940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321724 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.459 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.460 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:09:04.461 nr_hugepages=1024 00:09:04.461 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:09:04.461 resv_hugepages=0 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:09:04.461 surplus_hugepages=0 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:09:04.461 anon_hugepages=0 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191365152 kB' 'MemFree: 168746224 kB' 'MemAvailable: 172426444 kB' 'Buffers: 6836 kB' 'Cached: 16684436 kB' 'SwapCached: 0 kB' 'Active: 13320760 kB' 'Inactive: 4055652 kB' 'Active(anon): 12837140 kB' 'Inactive(anon): 0 kB' 'Active(file): 483620 kB' 'Inactive(file): 4055652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688524 kB' 'Mapped: 192712 kB' 'Shmem: 12152000 kB' 'KReclaimable: 566724 kB' 'Slab: 1327248 kB' 'SReclaimable: 566724 kB' 'SUnreclaim: 760524 kB' 'KernelStack: 22016 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103022604 kB' 'Committed_AS: 14275964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 321724 kB' 'VmallocChunk: 0 kB' 'Percpu: 356608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 852948 kB' 'DirectMap2M: 42866688 kB' 'DirectMap1G: 159383552 kB' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.461 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:04.462 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- 
# no_nodes=2 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97606856 kB' 'MemFree: 83506420 kB' 'MemUsed: 14100436 kB' 'SwapCached: 0 kB' 'Active: 6497296 kB' 'Inactive: 3947212 kB' 'Active(anon): 6175688 kB' 'Inactive(anon): 0 kB' 'Active(file): 321608 kB' 'Inactive(file): 3947212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9921172 kB' 'Mapped: 119048 kB' 'AnonPages: 526620 kB' 'Shmem: 5652352 kB' 'KernelStack: 13064 kB' 'PageTables: 6220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396020 kB' 'Slab: 804088 kB' 'SReclaimable: 396020 kB' 'SUnreclaim: 408068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 
18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.463 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:04.464 18:05:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:09:04.464 node0=1024 expecting 1024 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:09:04.464 00:09:04.464 real 0m5.080s 00:09:04.464 user 0m2.012s 00:09:04.464 sys 0m3.022s 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.464 18:05:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:09:04.464 ************************************ 00:09:04.464 END TEST no_shrink_alloc 00:09:04.464 ************************************ 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:09:04.464 18:05:41 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:09:04.464 00:09:04.464 real 0m19.383s 00:09:04.464 user 0m6.295s 00:09:04.464 sys 0m9.557s 00:09:04.464 18:05:41 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.464 18:05:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:09:04.464 ************************************ 00:09:04.464 END TEST hugepages 00:09:04.464 ************************************ 00:09:04.723 18:05:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:09:04.723 18:05:41 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
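[Editor's note, before the driver tests below] The hugepages trace above repeatedly walks /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo) with IFS=': ', comparing each field name against the requested key (HugePages_Rsvd, HugePages_Total, HugePages_Surp, ...) and echoing the matching value. The following is a minimal stand-alone sketch of that lookup pattern; the helper name and signature are illustrative, not the exact setup/common.sh implementation.

# Sketch (assumed helper, not the real setup/common.sh code): print the numeric
# value of one key from /proc/meminfo, or from the per-node sysfs copy when a
# NUMA node is given.
get_meminfo_value() {
    local key=$1 node=${2:-} file=/proc/meminfo
    local var val _
    # Per-node counters live in sysfs and carry a "Node N " prefix on every line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Drop the optional "Node N " prefix, then split each "Key:   value unit" line.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

# Example calls mirroring the values echoed in the trace above:
#   get_meminfo_value HugePages_Total      -> 1024
#   get_meminfo_value HugePages_Surp 0     -> 0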
00:09:04.723 18:05:41 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.723 18:05:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:04.723 ************************************ 00:09:04.723 START TEST driver 00:09:04.723 ************************************ 00:09:04.723 18:05:41 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:09:04.723 * Looking for test storage... 00:09:04.723 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:09:04.723 18:05:42 setup.sh.driver -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.723 18:05:42 setup.sh.driver -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.723 18:05:42 setup.sh.driver -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.723 18:05:42 setup.sh.driver -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.723 18:05:42 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:09:04.723 18:05:42 setup.sh.driver -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.723 18:05:42 setup.sh.driver -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.723 --rc genhtml_branch_coverage=1 00:09:04.723 --rc genhtml_function_coverage=1 00:09:04.723 --rc genhtml_legend=1 00:09:04.723 --rc geninfo_all_blocks=1 00:09:04.723 --rc geninfo_unexecuted_blocks=1 00:09:04.723 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:04.724 ' 00:09:04.724 18:05:42 setup.sh.driver -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.724 --rc genhtml_branch_coverage=1 00:09:04.724 --rc genhtml_function_coverage=1 00:09:04.724 --rc genhtml_legend=1 00:09:04.724 --rc geninfo_all_blocks=1 00:09:04.724 --rc geninfo_unexecuted_blocks=1 00:09:04.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:04.724 ' 00:09:04.724 18:05:42 setup.sh.driver -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.724 --rc genhtml_branch_coverage=1 00:09:04.724 --rc genhtml_function_coverage=1 00:09:04.724 --rc genhtml_legend=1 00:09:04.724 --rc geninfo_all_blocks=1 00:09:04.724 --rc geninfo_unexecuted_blocks=1 00:09:04.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:04.724 ' 00:09:04.724 18:05:42 setup.sh.driver -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.724 --rc genhtml_branch_coverage=1 00:09:04.724 --rc genhtml_function_coverage=1 00:09:04.724 --rc genhtml_legend=1 00:09:04.724 --rc geninfo_all_blocks=1 00:09:04.724 --rc geninfo_unexecuted_blocks=1 00:09:04.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:04.724 ' 00:09:04.724 18:05:42 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:09:04.724 18:05:42 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:04.724 18:05:42 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:08.917 18:05:46 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:09:08.917 18:05:46 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.917 18:05:46 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.917 18:05:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:09:08.917 ************************************ 00:09:08.917 START TEST guess_driver 00:09:08.917 ************************************ 00:09:08.917 18:05:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:09:08.917 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 )) 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:09:08.918 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:09:08.918 Looking for driver=vfio-pci 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:09:08.918 18:05:46 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ not == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ not == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # 
[[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.521 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:11.780 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:11.780 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:11.780 18:05:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:15.067 18:05:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:19.257 00:09:19.257 real 0m10.234s 00:09:19.257 user 0m2.287s 00:09:19.257 sys 0m3.956s 00:09:19.257 18:05:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.257 18:05:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:09:19.257 ************************************ 00:09:19.257 END TEST guess_driver 00:09:19.257 ************************************ 00:09:19.257 00:09:19.257 real 
0m14.406s 00:09:19.257 user 0m3.511s 00:09:19.257 sys 0m6.094s 00:09:19.257 18:05:56 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.257 18:05:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:09:19.257 ************************************ 00:09:19.257 END TEST driver 00:09:19.257 ************************************ 00:09:19.257 18:05:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:09:19.257 18:05:56 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.257 18:05:56 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.257 18:05:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:19.257 ************************************ 00:09:19.257 START TEST devices 00:09:19.257 ************************************ 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:09:19.257 * Looking for test storage... 00:09:19.257 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.257 18:05:56 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.257 --rc genhtml_branch_coverage=1 00:09:19.257 --rc genhtml_function_coverage=1 00:09:19.257 --rc genhtml_legend=1 00:09:19.257 --rc geninfo_all_blocks=1 00:09:19.257 --rc geninfo_unexecuted_blocks=1 00:09:19.257 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:19.257 ' 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.257 --rc genhtml_branch_coverage=1 00:09:19.257 --rc genhtml_function_coverage=1 00:09:19.257 --rc genhtml_legend=1 00:09:19.257 --rc geninfo_all_blocks=1 00:09:19.257 --rc geninfo_unexecuted_blocks=1 00:09:19.257 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:19.257 ' 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.257 --rc genhtml_branch_coverage=1 00:09:19.257 --rc genhtml_function_coverage=1 00:09:19.257 --rc genhtml_legend=1 00:09:19.257 --rc geninfo_all_blocks=1 00:09:19.257 --rc geninfo_unexecuted_blocks=1 00:09:19.257 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:19.257 ' 00:09:19.257 18:05:56 setup.sh.devices -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.258 --rc genhtml_branch_coverage=1 00:09:19.258 --rc genhtml_function_coverage=1 00:09:19.258 --rc genhtml_legend=1 00:09:19.258 --rc geninfo_all_blocks=1 00:09:19.258 --rc geninfo_unexecuted_blocks=1 00:09:19.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:19.258 ' 00:09:19.258 18:05:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:09:19.258 18:05:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:09:19.258 18:05:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:19.258 18:05:56 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d9:00.0 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\9\:\0\0\.\0* ]] 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:09:22.544 18:05:59 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:09:22.544 No valid GPT data, bailing 00:09:22.544 18:05:59 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:09:22.544 18:05:59 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:22.544 18:05:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d9:00.0 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:09:22.544 18:05:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.544 18:05:59 
setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.544 18:05:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:22.544 ************************************ 00:09:22.544 START TEST nvme_mount 00:09:22.544 ************************************ 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:22.544 18:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:09:23.111 Creating new GPT entries in memory. 00:09:23.111 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:23.111 other utilities. 00:09:23.112 18:06:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:09:23.112 18:06:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:23.112 18:06:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:23.112 18:06:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:23.112 18:06:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:09:24.049 Creating new GPT entries in memory. 00:09:24.049 The operation has completed successfully. 
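At this point the nvme_mount trace has zapped the test NVMe disk and carved a single GPT partition (sectors 2048-2099199); the entries that follow format it with mkfs.ext4 -qF and mount it for verification. Below is a minimal, self-contained sketch of that same partition-format-mount-cleanup sequence. It is an illustration only, not the SPDK setup script: it assumes /dev/nvme0n1 is a disposable test disk, the mount point is a placeholder, and partprobe stands in for the uevent synchronization (scripts/sync_dev_uevents.sh) the test itself uses.

#!/usr/bin/env bash
# Illustration of the partition/format/mount steps seen in the trace above.
# Assumption: /dev/nvme0n1 is a throwaway test disk; $mnt is a placeholder path.
set -euo pipefail

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_example          # placeholder mount point, not the test's path

sgdisk "$disk" --zap-all             # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199  # one ~1 GiB partition, as in the trace
partprobe "$disk"                    # generic stand-in for sync_dev_uevents.sh

mkfs.ext4 -qF "${disk}p1"            # quiet, forced ext4 format of the new partition
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"             # mount it so a test file can be written

# Cleanup mirrors the later cleanup_nvme steps: unmount, then wipe signatures.
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"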
00:09:24.049 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:24.049 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:24.050 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3255244 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d9:00.0 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:24.308 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d9:00.0 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:24.309 18:06:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d9:00.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:26.845 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:26.845 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:26.846 18:06:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:26.846 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:09:26.846 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:09:26.846 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:26.846 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:26.846 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:09:26.846 18:06:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:09:26.846 18:06:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:26.846 18:06:04 
setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:09:26.846 18:06:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:09:26.846 18:06:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d9:00.0 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d9:00.0 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:27.117 18:06:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:29.195 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 
18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d9:00.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:29.454 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d9:00.0 data@nvme0n1 '' '' 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d9:00.0 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d9:00.0 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:29.712 18:06:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:32.244 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.244 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.244 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.244 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d9:00.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 
00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:32.245 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:32.504 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:32.504 00:09:32.504 real 0m10.314s 00:09:32.504 user 0m2.944s 00:09:32.504 sys 0m5.040s 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.504 18:06:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:09:32.504 ************************************ 00:09:32.504 END TEST nvme_mount 00:09:32.504 ************************************ 00:09:32.504 18:06:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:09:32.504 18:06:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.504 18:06:09 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.504 18:06:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:32.504 ************************************ 00:09:32.504 START TEST dm_mount 00:09:32.504 ************************************ 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@46 
-- # (( part++ )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:32.504 18:06:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:09:33.462 Creating new GPT entries in memory. 00:09:33.462 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:33.462 other utilities. 00:09:33.462 18:06:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:09:33.462 18:06:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:33.462 18:06:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:33.462 18:06:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:33.462 18:06:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:09:34.396 Creating new GPT entries in memory. 00:09:34.396 The operation has completed successfully. 00:09:34.396 18:06:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:34.396 18:06:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:34.396 18:06:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:34.396 18:06:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:34.653 18:06:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:09:35.587 The operation has completed successfully. 
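For dm_mount the trace goes on to create a second 1 GiB partition and then builds a device-mapper node named nvme_dm_test over the two partitions (dmsetup create, readlink -f /dev/mapper/nvme_dm_test resolving to /dev/dm-0, and holder checks under /sys/class/block). The sketch below reproduces that shape; it is not the SPDK setup script, and the linear concatenation table is an assumption made purely for illustration, since the trace does not show the table actually fed to dmsetup.

#!/usr/bin/env bash
# Illustration of a device-mapper node over the two test partitions, in the
# spirit of the dmsetup steps that follow in the trace. The linear table is an
# assumed example; SPDK's actual mapping table is not visible in this log.
set -euo pipefail

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
name=nvme_dm_test

s1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

# dmsetup reads the mapping table from stdin when no table file is given:
# concatenate p1 and p2 into one linear device.
dmsetup create "$name" <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

dm=$(readlink -f "/dev/mapper/$name")       # resolves to e.g. /dev/dm-0, as in the trace
echo "dm node: $dm"
ls "/sys/class/block/${p1##*/}/holders"     # both partitions should list the dm node as holder
ls "/sys/class/block/${p2##*/}/holders"

# Cleanup mirrors the cleanup_dm steps: remove the dm node, then wipe the partitions.
dmsetup remove --force "$name"
wipefs --all "$p1" "$p2"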
00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3260012 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:09:35.587 18:06:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d9:00.0 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d9:00.0 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:35.587 18:06:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d9:00.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:09:38.119 
18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.119 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:38.120 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:38.379 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:09:38.379 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d9:00.0 00:09:38.380 18:06:15 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d9:00.0 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:38.380 18:06:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5d:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.331 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d9:00.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.591 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\9\:\0\0\.\0 ]] 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:40.592 18:06:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:09:40.851 18:06:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:40.851 18:06:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:09:40.851 
/dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:40.851 18:06:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:40.851 18:06:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:09:40.851 00:09:40.851 real 0m8.261s 00:09:40.851 user 0m1.908s 00:09:40.851 sys 0m3.295s 00:09:40.851 18:06:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.851 18:06:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:09:40.851 ************************************ 00:09:40.851 END TEST dm_mount 00:09:40.851 ************************************ 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:40.851 18:06:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:41.110 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:09:41.110 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:09:41.110 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:41.110 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:09:41.110 18:06:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:09:41.110 00:09:41.110 real 0m21.953s 00:09:41.110 user 0m6.132s 00:09:41.110 sys 0m10.291s 00:09:41.110 18:06:18 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.110 18:06:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:41.110 ************************************ 00:09:41.110 END TEST devices 00:09:41.110 ************************************ 00:09:41.110 00:09:41.110 real 1m17.270s 00:09:41.110 user 0m22.376s 00:09:41.110 sys 0m36.927s 00:09:41.110 18:06:18 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.110 18:06:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:41.110 ************************************ 00:09:41.110 END TEST setup.sh 00:09:41.110 ************************************ 00:09:41.110 18:06:18 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:09:43.645 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:09:43.645 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:09:43.645 Hugepages 
00:09:43.645 node hugesize free / total 00:09:43.645 node0 1048576kB 0 / 0 00:09:43.645 node0 2048kB 1024 / 1024 00:09:43.645 node1 1048576kB 0 / 0 00:09:43.645 node1 2048kB 1024 / 1024 00:09:43.645 00:09:43.646 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:43.646 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:09:43.646 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:09:43.646 VMD 0000:5d:05.5 8086 201d 0 vfio-pci - - 00:09:43.646 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:09:43.646 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:09:43.646 VMD 0000:ae:05.5 8086 201d 1 vfio-pci - - 00:09:43.905 NVMe 0000:d9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:09:43.905 18:06:21 -- spdk/autotest.sh@117 -- # uname -s 00:09:43.905 18:06:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:43.905 18:06:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:43.905 18:06:21 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:45.811 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:09:46.070 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:09:46.070 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:46.070 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:46.070 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:46.070 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:46.070 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:46.070 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:46.329 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:49.824 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:09:49.824 18:06:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:50.762 18:06:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:50.762 18:06:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:50.762 18:06:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:50.762 18:06:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:50.762 18:06:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:50.762 18:06:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:50.762 18:06:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:50.762 18:06:28 -- 
common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:50.762 18:06:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:50.762 18:06:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:50.762 18:06:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d9:00.0 00:09:50.762 18:06:28 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:53.297 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:09:53.297 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:09:53.297 Waiting for block devices as requested 00:09:53.297 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:53.297 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:53.556 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:53.556 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:53.556 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:53.556 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:53.815 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:53.815 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:53.815 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:09:54.074 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:54.074 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:54.074 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:54.334 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:54.334 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:54.334 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:54.594 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:54.594 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:54.594 18:06:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:54.594 18:06:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d9:00.0 00:09:54.594 18:06:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:09:54.594 18:06:31 -- common/autotest_common.sh@1487 -- # grep 0000:d9:00.0/nvme/nvme 00:09:54.594 18:06:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:01.0/0000:d9:00.0/nvme/nvme0 00:09:54.594 18:06:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:01.0/0000:d9:00.0/nvme/nvme0 ]] 00:09:54.594 18:06:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:01.0/0000:d9:00.0/nvme/nvme0 00:09:54.594 18:06:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:54.594 18:06:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:54.594 18:06:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:54.594 18:06:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:54.594 18:06:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:54.594 18:06:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:54.594 18:06:32 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:09:54.594 18:06:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:54.594 18:06:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:54.594 18:06:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:54.594 18:06:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:54.594 18:06:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:54.594 18:06:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:54.594 18:06:32 -- 
common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:54.594 18:06:32 -- common/autotest_common.sh@1543 -- # continue 00:09:54.594 18:06:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:54.594 18:06:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.594 18:06:32 -- common/autotest_common.sh@10 -- # set +x 00:09:54.853 18:06:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:54.854 18:06:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.854 18:06:32 -- common/autotest_common.sh@10 -- # set +x 00:09:54.854 18:06:32 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:56.759 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:09:57.017 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:09:57.017 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:57.017 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:57.017 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:57.018 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:57.018 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:57.018 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:57.018 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:57.018 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:57.276 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:00.569 0000:d9:00.0 (8086 0a54): nvme -> vfio-pci 00:10:00.569 18:06:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:00.569 18:06:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.569 18:06:37 -- common/autotest_common.sh@10 -- # set +x 00:10:00.569 18:06:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:00.569 18:06:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:00.569 18:06:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:00.569 18:06:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:00.569 18:06:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:00.569 18:06:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:00.569 18:06:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:00.569 18:06:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:00.569 18:06:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:00.569 18:06:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:00.569 18:06:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:00.569 18:06:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:00.569 18:06:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:00.828 18:06:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:10:00.828 18:06:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d9:00.0 00:10:00.828 18:06:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:00.828 18:06:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d9:00.0/device 00:10:00.828 
18:06:38 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:10:00.828 18:06:38 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:10:00.828 18:06:38 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:10:00.828 18:06:38 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:10:00.828 18:06:38 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d9:00.0 00:10:00.828 18:06:38 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d9:00.0 ]] 00:10:00.828 18:06:38 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3268734 00:10:00.828 18:06:38 -- common/autotest_common.sh@1585 -- # waitforlisten 3268734 00:10:00.828 18:06:38 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:00.828 18:06:38 -- common/autotest_common.sh@835 -- # '[' -z 3268734 ']' 00:10:00.828 18:06:38 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.828 18:06:38 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.828 18:06:38 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.828 18:06:38 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.828 18:06:38 -- common/autotest_common.sh@10 -- # set +x 00:10:00.828 [2024-11-26 18:06:38.077937] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:00.828 [2024-11-26 18:06:38.078024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268734 ] 00:10:00.828 [2024-11-26 18:06:38.145859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.828 [2024-11-26 18:06:38.195527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.087 18:06:38 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.087 18:06:38 -- common/autotest_common.sh@868 -- # return 0 00:10:01.087 18:06:38 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:10:01.087 18:06:38 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:10:01.087 18:06:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d9:00.0 00:10:04.367 nvme0n1 00:10:04.367 18:06:41 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:10:04.367 [2024-11-26 18:06:41.585534] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:10:04.367 request: 00:10:04.367 { 00:10:04.367 "nvme_ctrlr_name": "nvme0", 00:10:04.367 "password": "test", 00:10:04.367 "method": "bdev_nvme_opal_revert", 00:10:04.367 "req_id": 1 00:10:04.367 } 00:10:04.367 Got JSON-RPC error response 00:10:04.367 response: 00:10:04.367 { 00:10:04.367 "code": -32602, 00:10:04.367 "message": "Invalid parameters" 00:10:04.367 } 00:10:04.367 18:06:41 -- common/autotest_common.sh@1591 -- # true 00:10:04.367 18:06:41 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:10:04.367 18:06:41 -- common/autotest_common.sh@1595 -- # killprocess 3268734 00:10:04.367 18:06:41 -- common/autotest_common.sh@954 -- # '[' -z 3268734 ']' 00:10:04.367 18:06:41 -- 
common/autotest_common.sh@958 -- # kill -0 3268734 00:10:04.367 18:06:41 -- common/autotest_common.sh@959 -- # uname 00:10:04.367 18:06:41 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.367 18:06:41 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3268734 00:10:04.367 18:06:41 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.367 18:06:41 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.367 18:06:41 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3268734' 00:10:04.367 killing process with pid 3268734 00:10:04.367 18:06:41 -- common/autotest_common.sh@973 -- # kill 3268734 00:10:04.367 18:06:41 -- common/autotest_common.sh@978 -- # wait 3268734 00:10:08.553 18:06:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:08.553 18:06:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:08.553 18:06:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:08.553 18:06:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:08.553 18:06:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:08.553 18:06:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.553 18:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:08.553 18:06:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:08.553 18:06:45 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:10:08.553 18:06:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.553 18:06:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.553 18:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:08.554 ************************************ 00:10:08.554 START TEST env 00:10:08.554 ************************************ 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:10:08.554 * Looking for test storage... 00:10:08.554 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.554 18:06:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.554 18:06:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.554 18:06:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.554 18:06:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.554 18:06:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.554 18:06:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.554 18:06:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.554 18:06:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.554 18:06:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.554 18:06:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.554 18:06:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.554 18:06:45 env -- scripts/common.sh@344 -- # case "$op" in 00:10:08.554 18:06:45 env -- scripts/common.sh@345 -- # : 1 00:10:08.554 18:06:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.554 18:06:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.554 18:06:45 env -- scripts/common.sh@365 -- # decimal 1 00:10:08.554 18:06:45 env -- scripts/common.sh@353 -- # local d=1 00:10:08.554 18:06:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.554 18:06:45 env -- scripts/common.sh@355 -- # echo 1 00:10:08.554 18:06:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.554 18:06:45 env -- scripts/common.sh@366 -- # decimal 2 00:10:08.554 18:06:45 env -- scripts/common.sh@353 -- # local d=2 00:10:08.554 18:06:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.554 18:06:45 env -- scripts/common.sh@355 -- # echo 2 00:10:08.554 18:06:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.554 18:06:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.554 18:06:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.554 18:06:45 env -- scripts/common.sh@368 -- # return 0 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.554 --rc genhtml_branch_coverage=1 00:10:08.554 --rc genhtml_function_coverage=1 00:10:08.554 --rc genhtml_legend=1 00:10:08.554 --rc geninfo_all_blocks=1 00:10:08.554 --rc geninfo_unexecuted_blocks=1 00:10:08.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:08.554 ' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.554 --rc genhtml_branch_coverage=1 00:10:08.554 --rc genhtml_function_coverage=1 00:10:08.554 --rc genhtml_legend=1 00:10:08.554 --rc geninfo_all_blocks=1 00:10:08.554 --rc geninfo_unexecuted_blocks=1 00:10:08.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:08.554 ' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.554 --rc genhtml_branch_coverage=1 00:10:08.554 --rc genhtml_function_coverage=1 00:10:08.554 --rc genhtml_legend=1 00:10:08.554 --rc geninfo_all_blocks=1 00:10:08.554 --rc geninfo_unexecuted_blocks=1 00:10:08.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:08.554 ' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.554 --rc genhtml_branch_coverage=1 00:10:08.554 --rc genhtml_function_coverage=1 00:10:08.554 --rc genhtml_legend=1 00:10:08.554 --rc geninfo_all_blocks=1 00:10:08.554 --rc geninfo_unexecuted_blocks=1 00:10:08.554 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:08.554 ' 00:10:08.554 18:06:45 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.554 18:06:45 env -- common/autotest_common.sh@10 -- # set +x 00:10:08.554 ************************************ 00:10:08.554 START TEST env_memory 00:10:08.554 ************************************ 00:10:08.554 18:06:45 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:10:08.554 00:10:08.554 00:10:08.554 CUnit - A unit testing framework for C - Version 2.1-3 00:10:08.554 http://cunit.sourceforge.net/ 00:10:08.554 00:10:08.554 00:10:08.554 Suite: memory 00:10:08.554 Test: alloc and free memory map ...[2024-11-26 18:06:45.871168] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:08.554 passed 00:10:08.554 Test: mem map translation ...[2024-11-26 18:06:45.888238] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:08.554 [2024-11-26 18:06:45.888255] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:08.554 [2024-11-26 18:06:45.888293] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:08.554 [2024-11-26 18:06:45.888302] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:08.554 passed 00:10:08.554 Test: mem map registration ...[2024-11-26 18:06:45.917807] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:08.554 [2024-11-26 18:06:45.917823] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:08.554 passed 00:10:08.554 Test: mem map adjacent registrations ...passed 00:10:08.554 00:10:08.554 Run Summary: Type Total Ran Passed Failed Inactive 00:10:08.554 suites 1 1 n/a 0 0 00:10:08.554 tests 4 4 4 0 0 00:10:08.554 asserts 152 152 152 0 n/a 00:10:08.554 00:10:08.554 Elapsed time = 0.112 seconds 00:10:08.554 00:10:08.554 real 0m0.123s 00:10:08.554 user 0m0.116s 00:10:08.554 sys 0m0.006s 00:10:08.554 18:06:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.554 18:06:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:08.554 ************************************ 00:10:08.554 END TEST env_memory 00:10:08.554 ************************************ 00:10:08.554 18:06:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.554 18:06:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.554 18:06:45 env -- common/autotest_common.sh@10 -- # set +x 00:10:08.554 ************************************ 00:10:08.554 START TEST env_vtophys 00:10:08.554 ************************************ 00:10:08.554 18:06:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:08.816 EAL: lib.eal log level changed from notice to debug 00:10:08.816 EAL: Detected lcore 0 as core 0 on socket 0 00:10:08.816 EAL: Detected lcore 1 as core 1 on socket 0 00:10:08.816 EAL: Detected lcore 2 as core 2 on socket 0 00:10:08.816 EAL: Detected lcore 3 as 
core 3 on socket 0 00:10:08.816 EAL: Detected lcore 4 as core 4 on socket 0 00:10:08.816 EAL: Detected lcore 5 as core 5 on socket 0 00:10:08.816 EAL: Detected lcore 6 as core 6 on socket 0 00:10:08.816 EAL: Detected lcore 7 as core 8 on socket 0 00:10:08.816 EAL: Detected lcore 8 as core 9 on socket 0 00:10:08.816 EAL: Detected lcore 9 as core 10 on socket 0 00:10:08.816 EAL: Detected lcore 10 as core 11 on socket 0 00:10:08.816 EAL: Detected lcore 11 as core 12 on socket 0 00:10:08.816 EAL: Detected lcore 12 as core 13 on socket 0 00:10:08.816 EAL: Detected lcore 13 as core 14 on socket 0 00:10:08.816 EAL: Detected lcore 14 as core 16 on socket 0 00:10:08.816 EAL: Detected lcore 15 as core 17 on socket 0 00:10:08.816 EAL: Detected lcore 16 as core 18 on socket 0 00:10:08.816 EAL: Detected lcore 17 as core 19 on socket 0 00:10:08.816 EAL: Detected lcore 18 as core 20 on socket 0 00:10:08.816 EAL: Detected lcore 19 as core 21 on socket 0 00:10:08.816 EAL: Detected lcore 20 as core 22 on socket 0 00:10:08.816 EAL: Detected lcore 21 as core 24 on socket 0 00:10:08.816 EAL: Detected lcore 22 as core 25 on socket 0 00:10:08.816 EAL: Detected lcore 23 as core 26 on socket 0 00:10:08.816 EAL: Detected lcore 24 as core 27 on socket 0 00:10:08.816 EAL: Detected lcore 25 as core 28 on socket 0 00:10:08.816 EAL: Detected lcore 26 as core 29 on socket 0 00:10:08.816 EAL: Detected lcore 27 as core 30 on socket 0 00:10:08.816 EAL: Detected lcore 28 as core 0 on socket 1 00:10:08.817 EAL: Detected lcore 29 as core 1 on socket 1 00:10:08.817 EAL: Detected lcore 30 as core 2 on socket 1 00:10:08.817 EAL: Detected lcore 31 as core 3 on socket 1 00:10:08.817 EAL: Detected lcore 32 as core 4 on socket 1 00:10:08.817 EAL: Detected lcore 33 as core 5 on socket 1 00:10:08.817 EAL: Detected lcore 34 as core 6 on socket 1 00:10:08.817 EAL: Detected lcore 35 as core 8 on socket 1 00:10:08.817 EAL: Detected lcore 36 as core 9 on socket 1 00:10:08.817 EAL: Detected lcore 37 as core 10 on socket 1 00:10:08.817 EAL: Detected lcore 38 as core 11 on socket 1 00:10:08.817 EAL: Detected lcore 39 as core 12 on socket 1 00:10:08.817 EAL: Detected lcore 40 as core 13 on socket 1 00:10:08.817 EAL: Detected lcore 41 as core 14 on socket 1 00:10:08.817 EAL: Detected lcore 42 as core 16 on socket 1 00:10:08.817 EAL: Detected lcore 43 as core 17 on socket 1 00:10:08.817 EAL: Detected lcore 44 as core 18 on socket 1 00:10:08.817 EAL: Detected lcore 45 as core 19 on socket 1 00:10:08.817 EAL: Detected lcore 46 as core 20 on socket 1 00:10:08.817 EAL: Detected lcore 47 as core 21 on socket 1 00:10:08.817 EAL: Detected lcore 48 as core 22 on socket 1 00:10:08.817 EAL: Detected lcore 49 as core 24 on socket 1 00:10:08.817 EAL: Detected lcore 50 as core 25 on socket 1 00:10:08.817 EAL: Detected lcore 51 as core 26 on socket 1 00:10:08.817 EAL: Detected lcore 52 as core 27 on socket 1 00:10:08.817 EAL: Detected lcore 53 as core 28 on socket 1 00:10:08.817 EAL: Detected lcore 54 as core 29 on socket 1 00:10:08.817 EAL: Detected lcore 55 as core 30 on socket 1 00:10:08.817 EAL: Detected lcore 56 as core 0 on socket 0 00:10:08.817 EAL: Detected lcore 57 as core 1 on socket 0 00:10:08.817 EAL: Detected lcore 58 as core 2 on socket 0 00:10:08.817 EAL: Detected lcore 59 as core 3 on socket 0 00:10:08.817 EAL: Detected lcore 60 as core 4 on socket 0 00:10:08.817 EAL: Detected lcore 61 as core 5 on socket 0 00:10:08.817 EAL: Detected lcore 62 as core 6 on socket 0 00:10:08.817 EAL: Detected lcore 63 as core 8 on socket 0 00:10:08.817 EAL: 
Detected lcore 64 as core 9 on socket 0 00:10:08.817 EAL: Detected lcore 65 as core 10 on socket 0 00:10:08.817 EAL: Detected lcore 66 as core 11 on socket 0 00:10:08.817 EAL: Detected lcore 67 as core 12 on socket 0 00:10:08.817 EAL: Detected lcore 68 as core 13 on socket 0 00:10:08.817 EAL: Detected lcore 69 as core 14 on socket 0 00:10:08.817 EAL: Detected lcore 70 as core 16 on socket 0 00:10:08.817 EAL: Detected lcore 71 as core 17 on socket 0 00:10:08.817 EAL: Detected lcore 72 as core 18 on socket 0 00:10:08.817 EAL: Detected lcore 73 as core 19 on socket 0 00:10:08.817 EAL: Detected lcore 74 as core 20 on socket 0 00:10:08.817 EAL: Detected lcore 75 as core 21 on socket 0 00:10:08.817 EAL: Detected lcore 76 as core 22 on socket 0 00:10:08.817 EAL: Detected lcore 77 as core 24 on socket 0 00:10:08.817 EAL: Detected lcore 78 as core 25 on socket 0 00:10:08.817 EAL: Detected lcore 79 as core 26 on socket 0 00:10:08.817 EAL: Detected lcore 80 as core 27 on socket 0 00:10:08.817 EAL: Detected lcore 81 as core 28 on socket 0 00:10:08.817 EAL: Detected lcore 82 as core 29 on socket 0 00:10:08.817 EAL: Detected lcore 83 as core 30 on socket 0 00:10:08.817 EAL: Detected lcore 84 as core 0 on socket 1 00:10:08.817 EAL: Detected lcore 85 as core 1 on socket 1 00:10:08.817 EAL: Detected lcore 86 as core 2 on socket 1 00:10:08.817 EAL: Detected lcore 87 as core 3 on socket 1 00:10:08.817 EAL: Detected lcore 88 as core 4 on socket 1 00:10:08.817 EAL: Detected lcore 89 as core 5 on socket 1 00:10:08.817 EAL: Detected lcore 90 as core 6 on socket 1 00:10:08.817 EAL: Detected lcore 91 as core 8 on socket 1 00:10:08.817 EAL: Detected lcore 92 as core 9 on socket 1 00:10:08.817 EAL: Detected lcore 93 as core 10 on socket 1 00:10:08.817 EAL: Detected lcore 94 as core 11 on socket 1 00:10:08.817 EAL: Detected lcore 95 as core 12 on socket 1 00:10:08.817 EAL: Detected lcore 96 as core 13 on socket 1 00:10:08.817 EAL: Detected lcore 97 as core 14 on socket 1 00:10:08.817 EAL: Detected lcore 98 as core 16 on socket 1 00:10:08.817 EAL: Detected lcore 99 as core 17 on socket 1 00:10:08.817 EAL: Detected lcore 100 as core 18 on socket 1 00:10:08.817 EAL: Detected lcore 101 as core 19 on socket 1 00:10:08.817 EAL: Detected lcore 102 as core 20 on socket 1 00:10:08.817 EAL: Detected lcore 103 as core 21 on socket 1 00:10:08.817 EAL: Detected lcore 104 as core 22 on socket 1 00:10:08.817 EAL: Detected lcore 105 as core 24 on socket 1 00:10:08.817 EAL: Detected lcore 106 as core 25 on socket 1 00:10:08.817 EAL: Detected lcore 107 as core 26 on socket 1 00:10:08.817 EAL: Detected lcore 108 as core 27 on socket 1 00:10:08.817 EAL: Detected lcore 109 as core 28 on socket 1 00:10:08.817 EAL: Detected lcore 110 as core 29 on socket 1 00:10:08.817 EAL: Detected lcore 111 as core 30 on socket 1 00:10:08.817 EAL: Maximum logical cores by configuration: 128 00:10:08.817 EAL: Detected CPU lcores: 112 00:10:08.817 EAL: Detected NUMA nodes: 2 00:10:08.817 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:08.817 EAL: Checking presence of .so 'librte_eal.so.24' 00:10:08.817 EAL: Checking presence of .so 'librte_eal.so' 00:10:08.817 EAL: Detected static linkage of DPDK 00:10:08.817 EAL: No shared files mode enabled, IPC will be disabled 00:10:08.817 EAL: Bus pci wants IOVA as 'DC' 00:10:08.817 EAL: Buses did not request a specific IOVA mode. 00:10:08.817 EAL: IOMMU is available, selecting IOVA as VA mode. 00:10:08.817 EAL: Selected IOVA mode 'VA' 00:10:08.817 EAL: Probing VFIO support... 
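The "Probing VFIO support" step succeeds here because scripts/setup.sh moved the allow-listed devices onto vfio-pci earlier in the run (the "nvme -> vfio-pci" and "ioatdma -> vfio-pci" entries above). A minimal sketch of that flow, assuming only the single controller this job allow-lists:

export PCI_ALLOWED=0000:d9:00.0        # restrict setup.sh to the device under test
./scripts/setup.sh config              # bind to vfio-pci so DPDK/SPDK can drive it (IOVA as VA)
./scripts/setup.sh status              # report hugepages and current driver bindings
./scripts/setup.sh reset               # hand the controller back to the kernel nvme driver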
00:10:08.817 EAL: IOMMU type 1 (Type 1) is supported 00:10:08.817 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:08.817 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:08.817 EAL: VFIO support initialized 00:10:08.817 EAL: Ask a virtual area of 0x2e000 bytes 00:10:08.817 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:08.817 EAL: Setting up physically contiguous memory... 00:10:08.817 EAL: Setting maximum number of open files to 524288 00:10:08.817 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:08.817 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:10:08.817 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:08.817 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.817 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:08.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:08.817 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.817 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:08.817 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:08.817 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.817 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:08.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:08.817 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.817 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:08.817 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:08.817 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.817 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:08.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:08.817 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.817 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:08.817 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:08.817 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.817 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:08.817 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:08.817 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.817 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:08.817 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:08.817 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:10:08.817 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.817 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:10:08.817 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:08.817 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.817 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:10:08.817 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:10:08.818 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.818 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:10:08.818 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:08.818 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.818 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:10:08.818 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:10:08.818 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.818 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:10:08.818 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:08.818 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.818 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:10:08.818 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:10:08.818 EAL: Ask a virtual area of 0x61000 bytes 00:10:08.818 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:10:08.818 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:08.818 EAL: Ask a virtual area of 0x400000000 bytes 00:10:08.818 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:10:08.818 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:10:08.818 EAL: Hugepages will be freed exactly as allocated. 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: TSC frequency is ~2700000 KHz 00:10:08.818 EAL: Main lcore 0 is ready (tid=7f577bdeaa00;cpuset=[0]) 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 0 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 2MB 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Mem event callback 'spdk:(nil)' registered 00:10:08.818 00:10:08.818 00:10:08.818 CUnit - A unit testing framework for C - Version 2.1-3 00:10:08.818 http://cunit.sourceforge.net/ 00:10:08.818 00:10:08.818 00:10:08.818 Suite: components_suite 00:10:08.818 Test: vtophys_malloc_test ...passed 00:10:08.818 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 4MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 4MB 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 6MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 6MB 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 10MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 10MB 00:10:08.818 EAL: Trying to obtain current memory policy. 
00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 18MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 18MB 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 34MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 34MB 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 66MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 66MB 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.818 EAL: Restoring previous memory policy: 4 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was expanded by 130MB 00:10:08.818 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.818 EAL: request: mp_malloc_sync 00:10:08.818 EAL: No shared files mode enabled, IPC is disabled 00:10:08.818 EAL: Heap on socket 0 was shrunk by 130MB 00:10:08.818 EAL: Trying to obtain current memory policy. 00:10:08.818 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:09.078 EAL: Restoring previous memory policy: 4 00:10:09.078 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.078 EAL: request: mp_malloc_sync 00:10:09.078 EAL: No shared files mode enabled, IPC is disabled 00:10:09.078 EAL: Heap on socket 0 was expanded by 258MB 00:10:09.078 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.078 EAL: request: mp_malloc_sync 00:10:09.078 EAL: No shared files mode enabled, IPC is disabled 00:10:09.078 EAL: Heap on socket 0 was shrunk by 258MB 00:10:09.078 EAL: Trying to obtain current memory policy. 
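The repeated heap expand/shrink cycles in this test draw from the 2 MB hugepage pool that setup.sh reserved for the run (1024 pages on each NUMA node, per the status table earlier). A quick, hedged way to confirm that reservation from the shell:

for n in /sys/devices/system/node/node0 /sys/devices/system/node/node1; do
  echo "$n: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages) x 2048 kB pages"
done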
00:10:09.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:09.078 EAL: Restoring previous memory policy: 4 00:10:09.078 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.078 EAL: request: mp_malloc_sync 00:10:09.078 EAL: No shared files mode enabled, IPC is disabled 00:10:09.078 EAL: Heap on socket 0 was expanded by 514MB 00:10:09.336 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.336 EAL: request: mp_malloc_sync 00:10:09.336 EAL: No shared files mode enabled, IPC is disabled 00:10:09.336 EAL: Heap on socket 0 was shrunk by 514MB 00:10:09.336 EAL: Trying to obtain current memory policy. 00:10:09.336 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:09.598 EAL: Restoring previous memory policy: 4 00:10:09.598 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.598 EAL: request: mp_malloc_sync 00:10:09.598 EAL: No shared files mode enabled, IPC is disabled 00:10:09.598 EAL: Heap on socket 0 was expanded by 1026MB 00:10:09.857 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.857 EAL: request: mp_malloc_sync 00:10:09.857 EAL: No shared files mode enabled, IPC is disabled 00:10:09.857 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:09.857 passed 00:10:09.857 00:10:09.857 Run Summary: Type Total Ran Passed Failed Inactive 00:10:09.857 suites 1 1 n/a 0 0 00:10:09.857 tests 2 2 2 0 0 00:10:09.857 asserts 497 497 497 0 n/a 00:10:09.857 00:10:09.857 Elapsed time = 1.113 seconds 00:10:09.857 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.857 EAL: request: mp_malloc_sync 00:10:09.857 EAL: No shared files mode enabled, IPC is disabled 00:10:09.857 EAL: Heap on socket 0 was shrunk by 2MB 00:10:09.857 EAL: No shared files mode enabled, IPC is disabled 00:10:09.857 EAL: No shared files mode enabled, IPC is disabled 00:10:09.857 EAL: No shared files mode enabled, IPC is disabled 00:10:09.857 00:10:09.857 real 0m1.243s 00:10:09.857 user 0m0.723s 00:10:09.857 sys 0m0.488s 00:10:09.857 18:06:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.857 18:06:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:09.857 ************************************ 00:10:09.857 END TEST env_vtophys 00:10:09.857 ************************************ 00:10:09.858 18:06:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:10:09.858 18:06:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.858 18:06:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.858 18:06:47 env -- common/autotest_common.sh@10 -- # set +x 00:10:09.858 ************************************ 00:10:09.858 START TEST env_pci 00:10:09.858 ************************************ 00:10:09.858 18:06:47 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:10:10.116 00:10:10.116 00:10:10.116 CUnit - A unit testing framework for C - Version 2.1-3 00:10:10.116 http://cunit.sourceforge.net/ 00:10:10.116 00:10:10.116 00:10:10.116 Suite: pci 00:10:10.116 Test: pci_hook ...[2024-11-26 18:06:47.314366] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3270481 has claimed it 00:10:10.116 EAL: Cannot find device (10000:00:01.0) 00:10:10.116 EAL: Failed to attach device on primary process 00:10:10.116 passed 00:10:10.116 00:10:10.116 Run Summary: Type Total Ran Passed Failed Inactive 
00:10:10.116 suites 1 1 n/a 0 0 00:10:10.116 tests 1 1 1 0 0 00:10:10.116 asserts 25 25 25 0 n/a 00:10:10.116 00:10:10.116 Elapsed time = 0.026 seconds 00:10:10.116 00:10:10.116 real 0m0.043s 00:10:10.116 user 0m0.015s 00:10:10.116 sys 0m0.028s 00:10:10.116 18:06:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.116 18:06:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:10.116 ************************************ 00:10:10.116 END TEST env_pci 00:10:10.116 ************************************ 00:10:10.116 18:06:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:10.116 18:06:47 env -- env/env.sh@15 -- # uname 00:10:10.116 18:06:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:10.116 18:06:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:10.116 18:06:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:10.116 18:06:47 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:10.116 18:06:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.116 18:06:47 env -- common/autotest_common.sh@10 -- # set +x 00:10:10.116 ************************************ 00:10:10.116 START TEST env_dpdk_post_init 00:10:10.116 ************************************ 00:10:10.116 18:06:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:10.116 EAL: Detected CPU lcores: 112 00:10:10.116 EAL: Detected NUMA nodes: 2 00:10:10.116 EAL: Detected static linkage of DPDK 00:10:10.116 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:10.116 EAL: Selected IOVA mode 'VA' 00:10:10.116 EAL: VFIO support initialized 00:10:10.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:10.116 EAL: Using IOMMU type 1 (Type 1) 00:10:10.116 EAL: Ignore mapping IO port bar(1) 00:10:10.116 EAL: Ignore mapping IO port bar(5) 00:10:10.116 EAL: Probe PCI driver: spdk_vmd (8086:201d) device: 0000:5d:05.5 (socket 0) 00:10:10.116 EAL: Ignore mapping IO port bar(1) 00:10:10.116 EAL: Ignore mapping IO port bar(5) 00:10:10.116 EAL: Probe PCI driver: spdk_vmd (8086:201d) device: 0000:ae:05.5 (socket 1) 00:10:11.051 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d9:00.0 (socket 1) 00:10:16.315 EAL: Releasing PCI mapped resource for 0000:d9:00.0 00:10:16.315 EAL: Calling pci_unmap_resource for 0000:d9:00.0 at 0x202019200000 00:10:16.573 Starting DPDK initialization... 00:10:16.573 Starting SPDK post initialization... 00:10:16.573 SPDK NVMe probe 00:10:16.573 Attaching to 0000:d9:00.0 00:10:16.573 Attached to 0000:d9:00.0 00:10:16.573 Cleaning up... 
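Every stage of this run resolves the same controller address before touching it; the discovery pipeline visible at autotest_common.sh@1499 is, as a hedged sketch:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"    # prints 0000:d9:00.0 on this host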
00:10:16.573 00:10:16.573 real 0m6.510s 00:10:16.573 user 0m4.924s 00:10:16.573 sys 0m0.810s 00:10:16.573 18:06:53 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.573 18:06:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:16.573 ************************************ 00:10:16.573 END TEST env_dpdk_post_init 00:10:16.573 ************************************ 00:10:16.573 18:06:53 env -- env/env.sh@26 -- # uname 00:10:16.573 18:06:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:16.573 18:06:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:16.573 18:06:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.573 18:06:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.573 18:06:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:16.573 ************************************ 00:10:16.573 START TEST env_mem_callbacks 00:10:16.573 ************************************ 00:10:16.573 18:06:53 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:16.573 EAL: Detected CPU lcores: 112 00:10:16.573 EAL: Detected NUMA nodes: 2 00:10:16.573 EAL: Detected static linkage of DPDK 00:10:16.573 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:16.831 EAL: Selected IOVA mode 'VA' 00:10:16.831 EAL: VFIO support initialized 00:10:16.831 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:16.831 00:10:16.831 00:10:16.831 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.831 http://cunit.sourceforge.net/ 00:10:16.831 00:10:16.831 00:10:16.831 Suite: memory 00:10:16.831 Test: test ... 
00:10:16.831 register 0x200000200000 2097152 00:10:16.831 malloc 3145728 00:10:16.831 register 0x200000400000 4194304 00:10:16.831 buf 0x200000500000 len 3145728 PASSED 00:10:16.831 malloc 64 00:10:16.831 buf 0x2000004fff40 len 64 PASSED 00:10:16.831 malloc 4194304 00:10:16.831 register 0x200000800000 6291456 00:10:16.831 buf 0x200000a00000 len 4194304 PASSED 00:10:16.831 free 0x200000500000 3145728 00:10:16.831 free 0x2000004fff40 64 00:10:16.831 unregister 0x200000400000 4194304 PASSED 00:10:16.831 free 0x200000a00000 4194304 00:10:16.831 unregister 0x200000800000 6291456 PASSED 00:10:16.831 malloc 8388608 00:10:16.831 register 0x200000400000 10485760 00:10:16.831 buf 0x200000600000 len 8388608 PASSED 00:10:16.831 free 0x200000600000 8388608 00:10:16.831 unregister 0x200000400000 10485760 PASSED 00:10:16.831 passed 00:10:16.831 00:10:16.831 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.831 suites 1 1 n/a 0 0 00:10:16.831 tests 1 1 1 0 0 00:10:16.831 asserts 15 15 15 0 n/a 00:10:16.831 00:10:16.831 Elapsed time = 0.007 seconds 00:10:16.831 00:10:16.831 real 0m0.059s 00:10:16.831 user 0m0.014s 00:10:16.831 sys 0m0.044s 00:10:16.831 18:06:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.831 18:06:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:16.831 ************************************ 00:10:16.831 END TEST env_mem_callbacks 00:10:16.831 ************************************ 00:10:16.831 00:10:16.831 real 0m8.431s 00:10:16.831 user 0m6.002s 00:10:16.831 sys 0m1.645s 00:10:16.831 18:06:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.831 18:06:54 env -- common/autotest_common.sh@10 -- # set +x 00:10:16.831 ************************************ 00:10:16.831 END TEST env 00:10:16.831 ************************************ 00:10:16.831 18:06:54 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:10:16.831 18:06:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.831 18:06:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.831 18:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:16.831 ************************************ 00:10:16.832 START TEST rpc 00:10:16.832 ************************************ 00:10:16.832 18:06:54 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:10:16.832 * Looking for test storage... 
00:10:16.832 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:16.832 18:06:54 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.832 18:06:54 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.832 18:06:54 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.090 18:06:54 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.090 18:06:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.090 18:06:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.090 18:06:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.090 18:06:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.090 18:06:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.090 18:06:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.090 18:06:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.090 18:06:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.090 18:06:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.090 18:06:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.090 18:06:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.090 18:06:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:17.091 18:06:54 rpc -- scripts/common.sh@345 -- # : 1 00:10:17.091 18:06:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.091 18:06:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.091 18:06:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:17.091 18:06:54 rpc -- scripts/common.sh@353 -- # local d=1 00:10:17.091 18:06:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.091 18:06:54 rpc -- scripts/common.sh@355 -- # echo 1 00:10:17.091 18:06:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.091 18:06:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:17.091 18:06:54 rpc -- scripts/common.sh@353 -- # local d=2 00:10:17.091 18:06:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.091 18:06:54 rpc -- scripts/common.sh@355 -- # echo 2 00:10:17.091 18:06:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.091 18:06:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.091 18:06:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.091 18:06:54 rpc -- scripts/common.sh@368 -- # return 0 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.091 --rc genhtml_branch_coverage=1 00:10:17.091 --rc genhtml_function_coverage=1 00:10:17.091 --rc genhtml_legend=1 00:10:17.091 --rc geninfo_all_blocks=1 00:10:17.091 --rc geninfo_unexecuted_blocks=1 00:10:17.091 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:17.091 ' 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.091 --rc genhtml_branch_coverage=1 00:10:17.091 --rc genhtml_function_coverage=1 00:10:17.091 --rc genhtml_legend=1 00:10:17.091 --rc geninfo_all_blocks=1 00:10:17.091 --rc geninfo_unexecuted_blocks=1 00:10:17.091 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:17.091 ' 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:10:17.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.091 --rc genhtml_branch_coverage=1 00:10:17.091 --rc genhtml_function_coverage=1 00:10:17.091 --rc genhtml_legend=1 00:10:17.091 --rc geninfo_all_blocks=1 00:10:17.091 --rc geninfo_unexecuted_blocks=1 00:10:17.091 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:17.091 ' 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.091 --rc genhtml_branch_coverage=1 00:10:17.091 --rc genhtml_function_coverage=1 00:10:17.091 --rc genhtml_legend=1 00:10:17.091 --rc geninfo_all_blocks=1 00:10:17.091 --rc geninfo_unexecuted_blocks=1 00:10:17.091 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:17.091 ' 00:10:17.091 18:06:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3271882 00:10:17.091 18:06:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:10:17.091 18:06:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:17.091 18:06:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3271882 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 3271882 ']' 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.091 18:06:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.091 [2024-11-26 18:06:54.330156] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:17.091 [2024-11-26 18:06:54.330238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271882 ] 00:10:17.091 [2024-11-26 18:06:54.409001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.091 [2024-11-26 18:06:54.455161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:17.091 [2024-11-26 18:06:54.455203] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3271882' to capture a snapshot of events at runtime. 00:10:17.091 [2024-11-26 18:06:54.455212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.091 [2024-11-26 18:06:54.455220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.091 [2024-11-26 18:06:54.455226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3271882 for offline analysis/debug. 
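Note: the app_setup_trace notices above describe the runtime tracing hook: spdk_tgt was launched with '-e bdev', so the bdev tracepoint group is enabled and a snapshot can be taken while the target runs. A minimal sketch of the two options the notice itself suggests; the build/bin/spdk_trace location assumes a standard in-tree SPDK build and does not appear in this log:
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK"/build/bin/spdk_trace -s spdk_tgt -p 3271882   # live snapshot of the bdev tracepoints
  cp /dev/shm/spdk_tgt_trace.pid3271882 /tmp/           # or keep the shm file for offline analysis/debug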
00:10:17.091 [2024-11-26 18:06:54.455737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.350 18:06:54 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.350 18:06:54 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:17.350 18:06:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:17.350 18:06:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:17.350 18:06:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:17.350 18:06:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:17.350 18:06:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.350 18:06:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.350 18:06:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.350 ************************************ 00:10:17.350 START TEST rpc_integrity 00:10:17.350 ************************************ 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:17.350 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.350 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.609 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:17.609 { 00:10:17.609 "name": "Malloc0", 00:10:17.609 "aliases": [ 00:10:17.609 "4a1d6050-5aae-4fd0-808d-85976e0ab1fa" 00:10:17.609 ], 00:10:17.609 "product_name": "Malloc disk", 00:10:17.609 "block_size": 512, 00:10:17.609 "num_blocks": 16384, 00:10:17.609 "uuid": "4a1d6050-5aae-4fd0-808d-85976e0ab1fa", 00:10:17.609 "assigned_rate_limits": { 00:10:17.609 "rw_ios_per_sec": 0, 00:10:17.609 "rw_mbytes_per_sec": 0, 00:10:17.609 "r_mbytes_per_sec": 0, 00:10:17.609 "w_mbytes_per_sec": 
0 00:10:17.609 }, 00:10:17.609 "claimed": false, 00:10:17.609 "zoned": false, 00:10:17.609 "supported_io_types": { 00:10:17.609 "read": true, 00:10:17.609 "write": true, 00:10:17.609 "unmap": true, 00:10:17.609 "flush": true, 00:10:17.609 "reset": true, 00:10:17.609 "nvme_admin": false, 00:10:17.609 "nvme_io": false, 00:10:17.609 "nvme_io_md": false, 00:10:17.609 "write_zeroes": true, 00:10:17.609 "zcopy": true, 00:10:17.609 "get_zone_info": false, 00:10:17.609 "zone_management": false, 00:10:17.609 "zone_append": false, 00:10:17.609 "compare": false, 00:10:17.609 "compare_and_write": false, 00:10:17.609 "abort": true, 00:10:17.609 "seek_hole": false, 00:10:17.609 "seek_data": false, 00:10:17.609 "copy": true, 00:10:17.609 "nvme_iov_md": false 00:10:17.609 }, 00:10:17.609 "memory_domains": [ 00:10:17.609 { 00:10:17.609 "dma_device_id": "system", 00:10:17.609 "dma_device_type": 1 00:10:17.609 }, 00:10:17.609 { 00:10:17.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.609 "dma_device_type": 2 00:10:17.609 } 00:10:17.609 ], 00:10:17.609 "driver_specific": {} 00:10:17.609 } 00:10:17.609 ]' 00:10:17.609 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:17.609 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:17.609 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.609 [2024-11-26 18:06:54.857039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:17.609 [2024-11-26 18:06:54.857075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.609 [2024-11-26 18:06:54.857092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f620c0 00:10:17.609 [2024-11-26 18:06:54.857101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.609 [2024-11-26 18:06:54.858163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.609 [2024-11-26 18:06:54.858185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:17.609 Passthru0 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.609 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.609 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.609 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:17.609 { 00:10:17.609 "name": "Malloc0", 00:10:17.609 "aliases": [ 00:10:17.609 "4a1d6050-5aae-4fd0-808d-85976e0ab1fa" 00:10:17.609 ], 00:10:17.609 "product_name": "Malloc disk", 00:10:17.609 "block_size": 512, 00:10:17.609 "num_blocks": 16384, 00:10:17.609 "uuid": "4a1d6050-5aae-4fd0-808d-85976e0ab1fa", 00:10:17.609 "assigned_rate_limits": { 00:10:17.609 "rw_ios_per_sec": 0, 00:10:17.609 "rw_mbytes_per_sec": 0, 00:10:17.609 "r_mbytes_per_sec": 0, 00:10:17.609 "w_mbytes_per_sec": 0 00:10:17.609 }, 00:10:17.609 "claimed": true, 00:10:17.609 "claim_type": "exclusive_write", 00:10:17.609 "zoned": false, 00:10:17.609 "supported_io_types": { 00:10:17.609 "read": true, 00:10:17.609 "write": true, 00:10:17.609 "unmap": true, 
00:10:17.609 "flush": true, 00:10:17.609 "reset": true, 00:10:17.609 "nvme_admin": false, 00:10:17.609 "nvme_io": false, 00:10:17.609 "nvme_io_md": false, 00:10:17.609 "write_zeroes": true, 00:10:17.609 "zcopy": true, 00:10:17.609 "get_zone_info": false, 00:10:17.609 "zone_management": false, 00:10:17.609 "zone_append": false, 00:10:17.610 "compare": false, 00:10:17.610 "compare_and_write": false, 00:10:17.610 "abort": true, 00:10:17.610 "seek_hole": false, 00:10:17.610 "seek_data": false, 00:10:17.610 "copy": true, 00:10:17.610 "nvme_iov_md": false 00:10:17.610 }, 00:10:17.610 "memory_domains": [ 00:10:17.610 { 00:10:17.610 "dma_device_id": "system", 00:10:17.610 "dma_device_type": 1 00:10:17.610 }, 00:10:17.610 { 00:10:17.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.610 "dma_device_type": 2 00:10:17.610 } 00:10:17.610 ], 00:10:17.610 "driver_specific": {} 00:10:17.610 }, 00:10:17.610 { 00:10:17.610 "name": "Passthru0", 00:10:17.610 "aliases": [ 00:10:17.610 "33e609ed-a6b0-5fc2-a77e-28dacd1c9fec" 00:10:17.610 ], 00:10:17.610 "product_name": "passthru", 00:10:17.610 "block_size": 512, 00:10:17.610 "num_blocks": 16384, 00:10:17.610 "uuid": "33e609ed-a6b0-5fc2-a77e-28dacd1c9fec", 00:10:17.610 "assigned_rate_limits": { 00:10:17.610 "rw_ios_per_sec": 0, 00:10:17.610 "rw_mbytes_per_sec": 0, 00:10:17.610 "r_mbytes_per_sec": 0, 00:10:17.610 "w_mbytes_per_sec": 0 00:10:17.610 }, 00:10:17.610 "claimed": false, 00:10:17.610 "zoned": false, 00:10:17.610 "supported_io_types": { 00:10:17.610 "read": true, 00:10:17.610 "write": true, 00:10:17.610 "unmap": true, 00:10:17.610 "flush": true, 00:10:17.610 "reset": true, 00:10:17.610 "nvme_admin": false, 00:10:17.610 "nvme_io": false, 00:10:17.610 "nvme_io_md": false, 00:10:17.610 "write_zeroes": true, 00:10:17.610 "zcopy": true, 00:10:17.610 "get_zone_info": false, 00:10:17.610 "zone_management": false, 00:10:17.610 "zone_append": false, 00:10:17.610 "compare": false, 00:10:17.610 "compare_and_write": false, 00:10:17.610 "abort": true, 00:10:17.610 "seek_hole": false, 00:10:17.610 "seek_data": false, 00:10:17.610 "copy": true, 00:10:17.610 "nvme_iov_md": false 00:10:17.610 }, 00:10:17.610 "memory_domains": [ 00:10:17.610 { 00:10:17.610 "dma_device_id": "system", 00:10:17.610 "dma_device_type": 1 00:10:17.610 }, 00:10:17.610 { 00:10:17.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.610 "dma_device_type": 2 00:10:17.610 } 00:10:17.610 ], 00:10:17.610 "driver_specific": { 00:10:17.610 "passthru": { 00:10:17.610 "name": "Passthru0", 00:10:17.610 "base_bdev_name": "Malloc0" 00:10:17.610 } 00:10:17.610 } 00:10:17.610 } 00:10:17.610 ]' 00:10:17.610 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:17.610 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:17.610 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.610 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.610 18:06:54 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.610 18:06:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.610 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:17.610 18:06:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:17.610 18:06:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:17.610 00:10:17.610 real 0m0.285s 00:10:17.610 user 0m0.187s 00:10:17.610 sys 0m0.036s 00:10:17.610 18:06:55 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.610 18:06:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.610 ************************************ 00:10:17.610 END TEST rpc_integrity 00:10:17.610 ************************************ 00:10:17.610 18:06:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:17.610 18:06:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.610 18:06:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.610 18:06:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.868 ************************************ 00:10:17.869 START TEST rpc_plugins 00:10:17.869 ************************************ 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:17.869 { 00:10:17.869 "name": "Malloc1", 00:10:17.869 "aliases": [ 00:10:17.869 "281d1527-942e-4ac8-8d7d-c63e23058a4b" 00:10:17.869 ], 00:10:17.869 "product_name": "Malloc disk", 00:10:17.869 "block_size": 4096, 00:10:17.869 "num_blocks": 256, 00:10:17.869 "uuid": "281d1527-942e-4ac8-8d7d-c63e23058a4b", 00:10:17.869 "assigned_rate_limits": { 00:10:17.869 "rw_ios_per_sec": 0, 00:10:17.869 "rw_mbytes_per_sec": 0, 00:10:17.869 "r_mbytes_per_sec": 0, 00:10:17.869 "w_mbytes_per_sec": 0 00:10:17.869 }, 00:10:17.869 "claimed": false, 00:10:17.869 "zoned": false, 00:10:17.869 "supported_io_types": { 00:10:17.869 "read": true, 00:10:17.869 "write": true, 00:10:17.869 "unmap": true, 00:10:17.869 "flush": true, 00:10:17.869 "reset": true, 00:10:17.869 "nvme_admin": false, 00:10:17.869 "nvme_io": false, 00:10:17.869 "nvme_io_md": false, 00:10:17.869 "write_zeroes": true, 00:10:17.869 "zcopy": true, 00:10:17.869 "get_zone_info": false, 00:10:17.869 "zone_management": false, 00:10:17.869 "zone_append": false, 00:10:17.869 "compare": false, 00:10:17.869 "compare_and_write": false, 00:10:17.869 "abort": true, 00:10:17.869 "seek_hole": false, 00:10:17.869 "seek_data": false, 00:10:17.869 "copy": true, 00:10:17.869 
"nvme_iov_md": false 00:10:17.869 }, 00:10:17.869 "memory_domains": [ 00:10:17.869 { 00:10:17.869 "dma_device_id": "system", 00:10:17.869 "dma_device_type": 1 00:10:17.869 }, 00:10:17.869 { 00:10:17.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.869 "dma_device_type": 2 00:10:17.869 } 00:10:17.869 ], 00:10:17.869 "driver_specific": {} 00:10:17.869 } 00:10:17.869 ]' 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:17.869 18:06:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:17.869 00:10:17.869 real 0m0.137s 00:10:17.869 user 0m0.096s 00:10:17.869 sys 0m0.014s 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.869 18:06:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 ************************************ 00:10:17.869 END TEST rpc_plugins 00:10:17.869 ************************************ 00:10:17.869 18:06:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:17.869 18:06:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.869 18:06:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.869 18:06:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 ************************************ 00:10:17.869 START TEST rpc_trace_cmd_test 00:10:17.869 ************************************ 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:17.869 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3271882", 00:10:17.869 "tpoint_group_mask": "0x8", 00:10:17.869 "iscsi_conn": { 00:10:17.869 "mask": "0x2", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "scsi": { 00:10:17.869 "mask": "0x4", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "bdev": { 00:10:17.869 "mask": "0x8", 00:10:17.869 "tpoint_mask": "0xffffffffffffffff" 00:10:17.869 }, 00:10:17.869 "nvmf_rdma": { 00:10:17.869 "mask": "0x10", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "nvmf_tcp": { 00:10:17.869 "mask": "0x20", 
00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "ftl": { 00:10:17.869 "mask": "0x40", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "blobfs": { 00:10:17.869 "mask": "0x80", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "dsa": { 00:10:17.869 "mask": "0x200", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "thread": { 00:10:17.869 "mask": "0x400", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "nvme_pcie": { 00:10:17.869 "mask": "0x800", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "iaa": { 00:10:17.869 "mask": "0x1000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "nvme_tcp": { 00:10:17.869 "mask": "0x2000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "bdev_nvme": { 00:10:17.869 "mask": "0x4000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "sock": { 00:10:17.869 "mask": "0x8000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "blob": { 00:10:17.869 "mask": "0x10000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "bdev_raid": { 00:10:17.869 "mask": "0x20000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 }, 00:10:17.869 "scheduler": { 00:10:17.869 "mask": "0x40000", 00:10:17.869 "tpoint_mask": "0x0" 00:10:17.869 } 00:10:17.869 }' 00:10:17.869 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:18.128 00:10:18.128 real 0m0.232s 00:10:18.128 user 0m0.199s 00:10:18.128 sys 0m0.025s 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.128 18:06:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.128 ************************************ 00:10:18.128 END TEST rpc_trace_cmd_test 00:10:18.128 ************************************ 00:10:18.128 18:06:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:18.128 18:06:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:18.128 18:06:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:18.128 18:06:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.128 18:06:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.128 18:06:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.386 ************************************ 00:10:18.387 START TEST rpc_daemon_integrity 00:10:18.387 ************************************ 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:18.387 { 00:10:18.387 "name": "Malloc2", 00:10:18.387 "aliases": [ 00:10:18.387 "a1ef8e44-b740-4dd7-858a-15ed262a7dc2" 00:10:18.387 ], 00:10:18.387 "product_name": "Malloc disk", 00:10:18.387 "block_size": 512, 00:10:18.387 "num_blocks": 16384, 00:10:18.387 "uuid": "a1ef8e44-b740-4dd7-858a-15ed262a7dc2", 00:10:18.387 "assigned_rate_limits": { 00:10:18.387 "rw_ios_per_sec": 0, 00:10:18.387 "rw_mbytes_per_sec": 0, 00:10:18.387 "r_mbytes_per_sec": 0, 00:10:18.387 "w_mbytes_per_sec": 0 00:10:18.387 }, 00:10:18.387 "claimed": false, 00:10:18.387 "zoned": false, 00:10:18.387 "supported_io_types": { 00:10:18.387 "read": true, 00:10:18.387 "write": true, 00:10:18.387 "unmap": true, 00:10:18.387 "flush": true, 00:10:18.387 "reset": true, 00:10:18.387 "nvme_admin": false, 00:10:18.387 "nvme_io": false, 00:10:18.387 "nvme_io_md": false, 00:10:18.387 "write_zeroes": true, 00:10:18.387 "zcopy": true, 00:10:18.387 "get_zone_info": false, 00:10:18.387 "zone_management": false, 00:10:18.387 "zone_append": false, 00:10:18.387 "compare": false, 00:10:18.387 "compare_and_write": false, 00:10:18.387 "abort": true, 00:10:18.387 "seek_hole": false, 00:10:18.387 "seek_data": false, 00:10:18.387 "copy": true, 00:10:18.387 "nvme_iov_md": false 00:10:18.387 }, 00:10:18.387 "memory_domains": [ 00:10:18.387 { 00:10:18.387 "dma_device_id": "system", 00:10:18.387 "dma_device_type": 1 00:10:18.387 }, 00:10:18.387 { 00:10:18.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.387 "dma_device_type": 2 00:10:18.387 } 00:10:18.387 ], 00:10:18.387 "driver_specific": {} 00:10:18.387 } 00:10:18.387 ]' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 [2024-11-26 18:06:55.695235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:18.387 
[2024-11-26 18:06:55.695269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.387 [2024-11-26 18:06:55.695286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6083ad0 00:10:18.387 [2024-11-26 18:06:55.695295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.387 [2024-11-26 18:06:55.696251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.387 [2024-11-26 18:06:55.696273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:18.387 Passthru0 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:18.387 { 00:10:18.387 "name": "Malloc2", 00:10:18.387 "aliases": [ 00:10:18.387 "a1ef8e44-b740-4dd7-858a-15ed262a7dc2" 00:10:18.387 ], 00:10:18.387 "product_name": "Malloc disk", 00:10:18.387 "block_size": 512, 00:10:18.387 "num_blocks": 16384, 00:10:18.387 "uuid": "a1ef8e44-b740-4dd7-858a-15ed262a7dc2", 00:10:18.387 "assigned_rate_limits": { 00:10:18.387 "rw_ios_per_sec": 0, 00:10:18.387 "rw_mbytes_per_sec": 0, 00:10:18.387 "r_mbytes_per_sec": 0, 00:10:18.387 "w_mbytes_per_sec": 0 00:10:18.387 }, 00:10:18.387 "claimed": true, 00:10:18.387 "claim_type": "exclusive_write", 00:10:18.387 "zoned": false, 00:10:18.387 "supported_io_types": { 00:10:18.387 "read": true, 00:10:18.387 "write": true, 00:10:18.387 "unmap": true, 00:10:18.387 "flush": true, 00:10:18.387 "reset": true, 00:10:18.387 "nvme_admin": false, 00:10:18.387 "nvme_io": false, 00:10:18.387 "nvme_io_md": false, 00:10:18.387 "write_zeroes": true, 00:10:18.387 "zcopy": true, 00:10:18.387 "get_zone_info": false, 00:10:18.387 "zone_management": false, 00:10:18.387 "zone_append": false, 00:10:18.387 "compare": false, 00:10:18.387 "compare_and_write": false, 00:10:18.387 "abort": true, 00:10:18.387 "seek_hole": false, 00:10:18.387 "seek_data": false, 00:10:18.387 "copy": true, 00:10:18.387 "nvme_iov_md": false 00:10:18.387 }, 00:10:18.387 "memory_domains": [ 00:10:18.387 { 00:10:18.387 "dma_device_id": "system", 00:10:18.387 "dma_device_type": 1 00:10:18.387 }, 00:10:18.387 { 00:10:18.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.387 "dma_device_type": 2 00:10:18.387 } 00:10:18.387 ], 00:10:18.387 "driver_specific": {} 00:10:18.387 }, 00:10:18.387 { 00:10:18.387 "name": "Passthru0", 00:10:18.387 "aliases": [ 00:10:18.387 "417b4813-0804-5d85-a8e6-7d62b1fd3812" 00:10:18.387 ], 00:10:18.387 "product_name": "passthru", 00:10:18.387 "block_size": 512, 00:10:18.387 "num_blocks": 16384, 00:10:18.387 "uuid": "417b4813-0804-5d85-a8e6-7d62b1fd3812", 00:10:18.387 "assigned_rate_limits": { 00:10:18.387 "rw_ios_per_sec": 0, 00:10:18.387 "rw_mbytes_per_sec": 0, 00:10:18.387 "r_mbytes_per_sec": 0, 00:10:18.387 "w_mbytes_per_sec": 0 00:10:18.387 }, 00:10:18.387 "claimed": false, 00:10:18.387 "zoned": false, 00:10:18.387 "supported_io_types": { 00:10:18.387 "read": true, 00:10:18.387 "write": true, 00:10:18.387 "unmap": true, 00:10:18.387 "flush": true, 00:10:18.387 "reset": true, 
00:10:18.387 "nvme_admin": false, 00:10:18.387 "nvme_io": false, 00:10:18.387 "nvme_io_md": false, 00:10:18.387 "write_zeroes": true, 00:10:18.387 "zcopy": true, 00:10:18.387 "get_zone_info": false, 00:10:18.387 "zone_management": false, 00:10:18.387 "zone_append": false, 00:10:18.387 "compare": false, 00:10:18.387 "compare_and_write": false, 00:10:18.387 "abort": true, 00:10:18.387 "seek_hole": false, 00:10:18.387 "seek_data": false, 00:10:18.387 "copy": true, 00:10:18.387 "nvme_iov_md": false 00:10:18.387 }, 00:10:18.387 "memory_domains": [ 00:10:18.387 { 00:10:18.387 "dma_device_id": "system", 00:10:18.387 "dma_device_type": 1 00:10:18.387 }, 00:10:18.387 { 00:10:18.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.387 "dma_device_type": 2 00:10:18.387 } 00:10:18.387 ], 00:10:18.387 "driver_specific": { 00:10:18.387 "passthru": { 00:10:18.387 "name": "Passthru0", 00:10:18.387 "base_bdev_name": "Malloc2" 00:10:18.387 } 00:10:18.387 } 00:10:18.387 } 00:10:18.387 ]' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:18.387 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:18.646 18:06:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:18.646 00:10:18.646 real 0m0.263s 00:10:18.646 user 0m0.180s 00:10:18.646 sys 0m0.031s 00:10:18.646 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.646 18:06:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.646 ************************************ 00:10:18.646 END TEST rpc_daemon_integrity 00:10:18.646 ************************************ 00:10:18.646 18:06:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:18.646 18:06:55 rpc -- rpc/rpc.sh@84 -- # killprocess 3271882 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 3271882 ']' 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@958 -- # kill -0 3271882 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@959 -- # uname 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271882 
00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3271882' 00:10:18.646 killing process with pid 3271882 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@973 -- # kill 3271882 00:10:18.646 18:06:55 rpc -- common/autotest_common.sh@978 -- # wait 3271882 00:10:18.905 00:10:18.905 real 0m2.159s 00:10:18.905 user 0m2.811s 00:10:18.905 sys 0m0.707s 00:10:18.905 18:06:56 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.905 18:06:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.905 ************************************ 00:10:18.905 END TEST rpc 00:10:18.905 ************************************ 00:10:18.905 18:06:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:18.905 18:06:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.905 18:06:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.905 18:06:56 -- common/autotest_common.sh@10 -- # set +x 00:10:18.905 ************************************ 00:10:18.905 START TEST skip_rpc 00:10:18.905 ************************************ 00:10:18.905 18:06:56 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:19.164 * Looking for test storage... 00:10:19.164 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.164 18:06:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.164 --rc genhtml_branch_coverage=1 00:10:19.164 --rc genhtml_function_coverage=1 00:10:19.164 --rc genhtml_legend=1 00:10:19.164 --rc geninfo_all_blocks=1 00:10:19.164 --rc geninfo_unexecuted_blocks=1 00:10:19.164 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:19.164 ' 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.164 --rc genhtml_branch_coverage=1 00:10:19.164 --rc genhtml_function_coverage=1 00:10:19.164 --rc genhtml_legend=1 00:10:19.164 --rc geninfo_all_blocks=1 00:10:19.164 --rc geninfo_unexecuted_blocks=1 00:10:19.164 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:19.164 ' 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.164 --rc genhtml_branch_coverage=1 00:10:19.164 --rc genhtml_function_coverage=1 00:10:19.164 --rc genhtml_legend=1 00:10:19.164 --rc geninfo_all_blocks=1 00:10:19.164 --rc geninfo_unexecuted_blocks=1 00:10:19.164 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:19.164 ' 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.164 --rc genhtml_branch_coverage=1 00:10:19.164 --rc genhtml_function_coverage=1 00:10:19.164 --rc genhtml_legend=1 00:10:19.164 --rc geninfo_all_blocks=1 00:10:19.164 --rc geninfo_unexecuted_blocks=1 00:10:19.164 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:19.164 ' 00:10:19.164 18:06:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:19.164 18:06:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:10:19.164 18:06:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.164 18:06:56 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.164 18:06:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.164 ************************************ 00:10:19.164 START TEST skip_rpc 00:10:19.164 ************************************ 00:10:19.164 18:06:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:19.164 18:06:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3272383 00:10:19.164 18:06:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.164 18:06:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:19.164 18:06:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:19.164 [2024-11-26 18:06:56.584187] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:19.164 [2024-11-26 18:06:56.584262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272383 ] 00:10:19.423 [2024-11-26 18:06:56.659307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.423 [2024-11-26 18:06:56.707757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3272383 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3272383 ']' 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3272383 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272383 
00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272383' 00:10:24.689 killing process with pid 3272383 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3272383 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3272383 00:10:24.689 00:10:24.689 real 0m5.422s 00:10:24.689 user 0m5.147s 00:10:24.689 sys 0m0.301s 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.689 18:07:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.689 ************************************ 00:10:24.689 END TEST skip_rpc 00:10:24.689 ************************************ 00:10:24.689 18:07:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:24.689 18:07:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.689 18:07:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.689 18:07:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.689 ************************************ 00:10:24.689 START TEST skip_rpc_with_json 00:10:24.689 ************************************ 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3273384 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3273384 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3273384 ']' 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.689 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:24.689 [2024-11-26 18:07:02.072741] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:10:24.689 [2024-11-26 18:07:02.072789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273384 ] 00:10:24.948 [2024-11-26 18:07:02.147283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.948 [2024-11-26 18:07:02.199305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:25.207 [2024-11-26 18:07:02.461996] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:25.207 request: 00:10:25.207 { 00:10:25.207 "trtype": "tcp", 00:10:25.207 "method": "nvmf_get_transports", 00:10:25.207 "req_id": 1 00:10:25.207 } 00:10:25.207 Got JSON-RPC error response 00:10:25.207 response: 00:10:25.207 { 00:10:25.207 "code": -19, 00:10:25.207 "message": "No such device" 00:10:25.207 } 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:25.207 [2024-11-26 18:07:02.474107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.207 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:25.207 { 00:10:25.207 "subsystems": [ 00:10:25.207 { 00:10:25.207 "subsystem": "scheduler", 00:10:25.207 "config": [ 00:10:25.207 { 00:10:25.207 "method": "framework_set_scheduler", 00:10:25.207 "params": { 00:10:25.207 "name": "static" 00:10:25.207 } 00:10:25.207 } 00:10:25.207 ] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "vmd", 00:10:25.207 "config": [] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "sock", 00:10:25.207 "config": [ 00:10:25.207 { 00:10:25.207 "method": "sock_set_default_impl", 00:10:25.207 "params": { 00:10:25.207 "impl_name": "posix" 00:10:25.207 } 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "method": "sock_impl_set_options", 00:10:25.207 "params": { 00:10:25.207 "impl_name": "ssl", 00:10:25.207 "recv_buf_size": 4096, 00:10:25.207 "send_buf_size": 4096, 00:10:25.207 "enable_recv_pipe": true, 00:10:25.207 "enable_quickack": false, 00:10:25.207 
"enable_placement_id": 0, 00:10:25.207 "enable_zerocopy_send_server": true, 00:10:25.207 "enable_zerocopy_send_client": false, 00:10:25.207 "zerocopy_threshold": 0, 00:10:25.207 "tls_version": 0, 00:10:25.207 "enable_ktls": false 00:10:25.207 } 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "method": "sock_impl_set_options", 00:10:25.207 "params": { 00:10:25.207 "impl_name": "posix", 00:10:25.207 "recv_buf_size": 2097152, 00:10:25.207 "send_buf_size": 2097152, 00:10:25.207 "enable_recv_pipe": true, 00:10:25.207 "enable_quickack": false, 00:10:25.207 "enable_placement_id": 0, 00:10:25.207 "enable_zerocopy_send_server": true, 00:10:25.207 "enable_zerocopy_send_client": false, 00:10:25.207 "zerocopy_threshold": 0, 00:10:25.207 "tls_version": 0, 00:10:25.207 "enable_ktls": false 00:10:25.207 } 00:10:25.207 } 00:10:25.207 ] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "iobuf", 00:10:25.207 "config": [ 00:10:25.207 { 00:10:25.207 "method": "iobuf_set_options", 00:10:25.207 "params": { 00:10:25.207 "small_pool_count": 8192, 00:10:25.207 "large_pool_count": 1024, 00:10:25.207 "small_bufsize": 8192, 00:10:25.207 "large_bufsize": 135168, 00:10:25.207 "enable_numa": false 00:10:25.207 } 00:10:25.207 } 00:10:25.207 ] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "keyring", 00:10:25.207 "config": [] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "vfio_user_target", 00:10:25.207 "config": null 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "fsdev", 00:10:25.207 "config": [ 00:10:25.207 { 00:10:25.207 "method": "fsdev_set_opts", 00:10:25.207 "params": { 00:10:25.207 "fsdev_io_pool_size": 65535, 00:10:25.207 "fsdev_io_cache_size": 256 00:10:25.207 } 00:10:25.207 } 00:10:25.207 ] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "accel", 00:10:25.207 "config": [ 00:10:25.207 { 00:10:25.207 "method": "accel_set_options", 00:10:25.207 "params": { 00:10:25.207 "small_cache_size": 128, 00:10:25.207 "large_cache_size": 16, 00:10:25.207 "task_count": 2048, 00:10:25.207 "sequence_count": 2048, 00:10:25.207 "buf_count": 2048 00:10:25.207 } 00:10:25.207 } 00:10:25.207 ] 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "subsystem": "bdev", 00:10:25.207 "config": [ 00:10:25.207 { 00:10:25.207 "method": "bdev_set_options", 00:10:25.207 "params": { 00:10:25.207 "bdev_io_pool_size": 65535, 00:10:25.207 "bdev_io_cache_size": 256, 00:10:25.207 "bdev_auto_examine": true, 00:10:25.207 "iobuf_small_cache_size": 128, 00:10:25.207 "iobuf_large_cache_size": 16 00:10:25.207 } 00:10:25.207 }, 00:10:25.207 { 00:10:25.207 "method": "bdev_raid_set_options", 00:10:25.207 "params": { 00:10:25.208 "process_window_size_kb": 1024, 00:10:25.208 "process_max_bandwidth_mb_sec": 0 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "bdev_nvme_set_options", 00:10:25.208 "params": { 00:10:25.208 "action_on_timeout": "none", 00:10:25.208 "timeout_us": 0, 00:10:25.208 "timeout_admin_us": 0, 00:10:25.208 "keep_alive_timeout_ms": 10000, 00:10:25.208 "arbitration_burst": 0, 00:10:25.208 "low_priority_weight": 0, 00:10:25.208 "medium_priority_weight": 0, 00:10:25.208 "high_priority_weight": 0, 00:10:25.208 "nvme_adminq_poll_period_us": 10000, 00:10:25.208 "nvme_ioq_poll_period_us": 0, 00:10:25.208 "io_queue_requests": 0, 00:10:25.208 "delay_cmd_submit": true, 00:10:25.208 "transport_retry_count": 4, 00:10:25.208 "bdev_retry_count": 3, 00:10:25.208 "transport_ack_timeout": 0, 00:10:25.208 "ctrlr_loss_timeout_sec": 0, 00:10:25.208 "reconnect_delay_sec": 0, 00:10:25.208 
"fast_io_fail_timeout_sec": 0, 00:10:25.208 "disable_auto_failback": false, 00:10:25.208 "generate_uuids": false, 00:10:25.208 "transport_tos": 0, 00:10:25.208 "nvme_error_stat": false, 00:10:25.208 "rdma_srq_size": 0, 00:10:25.208 "io_path_stat": false, 00:10:25.208 "allow_accel_sequence": false, 00:10:25.208 "rdma_max_cq_size": 0, 00:10:25.208 "rdma_cm_event_timeout_ms": 0, 00:10:25.208 "dhchap_digests": [ 00:10:25.208 "sha256", 00:10:25.208 "sha384", 00:10:25.208 "sha512" 00:10:25.208 ], 00:10:25.208 "dhchap_dhgroups": [ 00:10:25.208 "null", 00:10:25.208 "ffdhe2048", 00:10:25.208 "ffdhe3072", 00:10:25.208 "ffdhe4096", 00:10:25.208 "ffdhe6144", 00:10:25.208 "ffdhe8192" 00:10:25.208 ] 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "bdev_nvme_set_hotplug", 00:10:25.208 "params": { 00:10:25.208 "period_us": 100000, 00:10:25.208 "enable": false 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "bdev_iscsi_set_options", 00:10:25.208 "params": { 00:10:25.208 "timeout_sec": 30 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "bdev_wait_for_examine" 00:10:25.208 } 00:10:25.208 ] 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "nvmf", 00:10:25.208 "config": [ 00:10:25.208 { 00:10:25.208 "method": "nvmf_set_config", 00:10:25.208 "params": { 00:10:25.208 "discovery_filter": "match_any", 00:10:25.208 "admin_cmd_passthru": { 00:10:25.208 "identify_ctrlr": false 00:10:25.208 }, 00:10:25.208 "dhchap_digests": [ 00:10:25.208 "sha256", 00:10:25.208 "sha384", 00:10:25.208 "sha512" 00:10:25.208 ], 00:10:25.208 "dhchap_dhgroups": [ 00:10:25.208 "null", 00:10:25.208 "ffdhe2048", 00:10:25.208 "ffdhe3072", 00:10:25.208 "ffdhe4096", 00:10:25.208 "ffdhe6144", 00:10:25.208 "ffdhe8192" 00:10:25.208 ] 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "nvmf_set_max_subsystems", 00:10:25.208 "params": { 00:10:25.208 "max_subsystems": 1024 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "nvmf_set_crdt", 00:10:25.208 "params": { 00:10:25.208 "crdt1": 0, 00:10:25.208 "crdt2": 0, 00:10:25.208 "crdt3": 0 00:10:25.208 } 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "method": "nvmf_create_transport", 00:10:25.208 "params": { 00:10:25.208 "trtype": "TCP", 00:10:25.208 "max_queue_depth": 128, 00:10:25.208 "max_io_qpairs_per_ctrlr": 127, 00:10:25.208 "in_capsule_data_size": 4096, 00:10:25.208 "max_io_size": 131072, 00:10:25.208 "io_unit_size": 131072, 00:10:25.208 "max_aq_depth": 128, 00:10:25.208 "num_shared_buffers": 511, 00:10:25.208 "buf_cache_size": 4294967295, 00:10:25.208 "dif_insert_or_strip": false, 00:10:25.208 "zcopy": false, 00:10:25.208 "c2h_success": true, 00:10:25.208 "sock_priority": 0, 00:10:25.208 "abort_timeout_sec": 1, 00:10:25.208 "ack_timeout": 0, 00:10:25.208 "data_wr_pool_size": 0 00:10:25.208 } 00:10:25.208 } 00:10:25.208 ] 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "nbd", 00:10:25.208 "config": [] 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "ublk", 00:10:25.208 "config": [] 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "vhost_blk", 00:10:25.208 "config": [] 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "scsi", 00:10:25.208 "config": null 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "iscsi", 00:10:25.208 "config": [ 00:10:25.208 { 00:10:25.208 "method": "iscsi_set_options", 00:10:25.208 "params": { 00:10:25.208 "node_base": "iqn.2016-06.io.spdk", 00:10:25.208 "max_sessions": 128, 00:10:25.208 "max_connections_per_session": 2, 
00:10:25.208 "max_queue_depth": 64, 00:10:25.208 "default_time2wait": 2, 00:10:25.208 "default_time2retain": 20, 00:10:25.208 "first_burst_length": 8192, 00:10:25.208 "immediate_data": true, 00:10:25.208 "allow_duplicated_isid": false, 00:10:25.208 "error_recovery_level": 0, 00:10:25.208 "nop_timeout": 60, 00:10:25.208 "nop_in_interval": 30, 00:10:25.208 "disable_chap": false, 00:10:25.208 "require_chap": false, 00:10:25.208 "mutual_chap": false, 00:10:25.208 "chap_group": 0, 00:10:25.208 "max_large_datain_per_connection": 64, 00:10:25.208 "max_r2t_per_connection": 4, 00:10:25.208 "pdu_pool_size": 36864, 00:10:25.208 "immediate_data_pool_size": 16384, 00:10:25.208 "data_out_pool_size": 2048 00:10:25.208 } 00:10:25.208 } 00:10:25.208 ] 00:10:25.208 }, 00:10:25.208 { 00:10:25.208 "subsystem": "vhost_scsi", 00:10:25.208 "config": [] 00:10:25.208 } 00:10:25.208 ] 00:10:25.208 } 00:10:25.208 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:25.208 18:07:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3273384 00:10:25.208 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3273384 ']' 00:10:25.208 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3273384 00:10:25.208 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3273384 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3273384' 00:10:25.468 killing process with pid 3273384 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3273384 00:10:25.468 18:07:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3273384 00:10:25.727 18:07:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3273644 00:10:25.727 18:07:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:25.727 18:07:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3273644 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3273644 ']' 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3273644 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3273644 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.996 18:07:08 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3273644' 00:10:30.996 killing process with pid 3273644 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3273644 00:10:30.996 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3273644 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:10:31.255 00:10:31.255 real 0m6.422s 00:10:31.255 user 0m6.087s 00:10:31.255 sys 0m0.678s 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:31.255 ************************************ 00:10:31.255 END TEST skip_rpc_with_json 00:10:31.255 ************************************ 00:10:31.255 18:07:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:31.255 18:07:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.255 18:07:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.255 18:07:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.255 ************************************ 00:10:31.255 START TEST skip_rpc_with_delay 00:10:31.255 ************************************ 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:31.255 [2024-11-26 18:07:08.566737] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:31.255 00:10:31.255 real 0m0.042s 00:10:31.255 user 0m0.025s 00:10:31.255 sys 0m0.017s 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.255 18:07:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:31.255 ************************************ 00:10:31.255 END TEST skip_rpc_with_delay 00:10:31.255 ************************************ 00:10:31.255 18:07:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:31.255 18:07:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:31.255 18:07:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:31.255 18:07:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.255 18:07:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.255 18:07:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.255 ************************************ 00:10:31.255 START TEST exit_on_failed_rpc_init 00:10:31.255 ************************************ 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3274718 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3274718 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3274718 ']' 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.255 18:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:31.255 [2024-11-26 18:07:08.674140] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:10:31.255 [2024-11-26 18:07:08.674188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274718 ] 00:10:31.514 [2024-11-26 18:07:08.748341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.514 [2024-11-26 18:07:08.798675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:31.773 [2024-11-26 18:07:09.051711] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:31.773 [2024-11-26 18:07:09.051775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274731 ] 00:10:31.773 [2024-11-26 18:07:09.105643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.773 [2024-11-26 18:07:09.147469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.773 [2024-11-26 18:07:09.147551] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:31.773 [2024-11-26 18:07:09.147561] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:31.773 [2024-11-26 18:07:09.147566] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3274718 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3274718 ']' 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3274718 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.773 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3274718 00:10:32.032 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.032 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.032 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3274718' 00:10:32.032 killing process with pid 3274718 00:10:32.032 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3274718 00:10:32.032 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3274718 00:10:32.290 00:10:32.290 real 0m0.942s 00:10:32.290 user 0m1.017s 00:10:32.290 sys 0m0.377s 00:10:32.290 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.290 18:07:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:32.290 ************************************ 00:10:32.290 END TEST exit_on_failed_rpc_init 00:10:32.290 ************************************ 00:10:32.290 18:07:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:32.290 00:10:32.290 real 0m13.285s 00:10:32.290 user 0m12.485s 00:10:32.290 sys 0m1.652s 00:10:32.290 18:07:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.290 18:07:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.290 ************************************ 00:10:32.290 END TEST skip_rpc 00:10:32.290 ************************************ 00:10:32.290 18:07:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:32.290 18:07:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.290 18:07:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.290 18:07:09 
-- common/autotest_common.sh@10 -- # set +x 00:10:32.290 ************************************ 00:10:32.290 START TEST rpc_client 00:10:32.290 ************************************ 00:10:32.290 18:07:09 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:32.548 * Looking for test storage... 00:10:32.548 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:10:32.548 18:07:09 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.548 18:07:09 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.548 18:07:09 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.548 18:07:09 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.548 18:07:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:32.548 18:07:09 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.549 18:07:09 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.549 --rc genhtml_branch_coverage=1 00:10:32.549 --rc genhtml_function_coverage=1 00:10:32.549 --rc genhtml_legend=1 00:10:32.549 --rc geninfo_all_blocks=1 00:10:32.549 --rc geninfo_unexecuted_blocks=1 00:10:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.549 ' 00:10:32.549 18:07:09 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.549 --rc genhtml_branch_coverage=1 00:10:32.549 --rc genhtml_function_coverage=1 00:10:32.549 --rc genhtml_legend=1 00:10:32.549 --rc geninfo_all_blocks=1 00:10:32.549 --rc geninfo_unexecuted_blocks=1 00:10:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.549 ' 00:10:32.549 18:07:09 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.549 --rc genhtml_branch_coverage=1 00:10:32.549 --rc genhtml_function_coverage=1 00:10:32.549 --rc genhtml_legend=1 00:10:32.549 --rc geninfo_all_blocks=1 00:10:32.549 --rc geninfo_unexecuted_blocks=1 00:10:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.549 ' 00:10:32.549 18:07:09 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.549 --rc genhtml_branch_coverage=1 00:10:32.549 --rc genhtml_function_coverage=1 00:10:32.549 --rc genhtml_legend=1 00:10:32.549 --rc geninfo_all_blocks=1 00:10:32.549 --rc geninfo_unexecuted_blocks=1 00:10:32.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.549 ' 00:10:32.549 18:07:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:32.549 OK 00:10:32.549 18:07:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:32.549 00:10:32.549 real 0m0.204s 00:10:32.549 user 0m0.118s 00:10:32.549 sys 0m0.099s 00:10:32.549 18:07:09 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:32.549 18:07:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:32.549 ************************************ 00:10:32.549 END TEST rpc_client 00:10:32.549 ************************************ 00:10:32.549 18:07:09 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:10:32.549 18:07:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.549 18:07:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.549 18:07:09 -- common/autotest_common.sh@10 -- # set +x 00:10:32.549 ************************************ 00:10:32.549 START TEST json_config 00:10:32.549 ************************************ 00:10:32.549 18:07:09 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.854 18:07:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.854 18:07:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.854 18:07:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.854 18:07:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.854 18:07:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.854 18:07:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:32.854 18:07:10 json_config -- scripts/common.sh@345 -- # : 1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.854 18:07:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.854 18:07:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@353 -- # local d=1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.854 18:07:10 json_config -- scripts/common.sh@355 -- # echo 1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.854 18:07:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@353 -- # local d=2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.854 18:07:10 json_config -- scripts/common.sh@355 -- # echo 2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.854 18:07:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.854 18:07:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.854 18:07:10 json_config -- scripts/common.sh@368 -- # return 0 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.854 --rc genhtml_branch_coverage=1 00:10:32.854 --rc genhtml_function_coverage=1 00:10:32.854 --rc genhtml_legend=1 00:10:32.854 --rc geninfo_all_blocks=1 00:10:32.854 --rc geninfo_unexecuted_blocks=1 00:10:32.854 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.854 ' 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.854 --rc genhtml_branch_coverage=1 00:10:32.854 --rc genhtml_function_coverage=1 00:10:32.854 --rc genhtml_legend=1 00:10:32.854 --rc geninfo_all_blocks=1 00:10:32.854 --rc geninfo_unexecuted_blocks=1 00:10:32.854 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.854 ' 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.854 --rc genhtml_branch_coverage=1 00:10:32.854 --rc genhtml_function_coverage=1 00:10:32.854 --rc genhtml_legend=1 00:10:32.854 --rc geninfo_all_blocks=1 00:10:32.854 --rc geninfo_unexecuted_blocks=1 00:10:32.854 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.854 ' 00:10:32.854 18:07:10 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.854 --rc genhtml_branch_coverage=1 00:10:32.854 --rc genhtml_function_coverage=1 00:10:32.854 --rc genhtml_legend=1 00:10:32.854 --rc geninfo_all_blocks=1 00:10:32.854 --rc geninfo_unexecuted_blocks=1 00:10:32.854 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:32.854 ' 00:10:32.854 18:07:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0051316f-76a7-e811-906e-00163566263e 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0051316f-76a7-e811-906e-00163566263e 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:32.854 18:07:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.854 18:07:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.854 18:07:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.854 18:07:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.854 18:07:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.854 18:07:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.854 18:07:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.854 18:07:10 json_config -- paths/export.sh@5 -- # export PATH 00:10:32.854 18:07:10 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@51 -- # : 0 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.854 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.854 18:07:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.854 18:07:10 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:10:32.855 18:07:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:32.855 18:07:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:32.855 18:07:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:32.855 18:07:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:32.855 18:07:10 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:32.855 WARNING: No tests are enabled so not running JSON configuration tests 00:10:32.855 18:07:10 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:32.855 00:10:32.855 real 0m0.192s 00:10:32.855 user 0m0.129s 00:10:32.855 sys 0m0.068s 00:10:32.855 18:07:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.855 18:07:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:32.855 ************************************ 00:10:32.855 END TEST json_config 00:10:32.855 ************************************ 00:10:32.855 18:07:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:32.855 18:07:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.855 18:07:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.855 18:07:10 -- common/autotest_common.sh@10 -- # set +x 00:10:32.855 ************************************ 00:10:32.855 START TEST json_config_extra_key 00:10:32.855 ************************************ 00:10:32.855 18:07:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:32.855 18:07:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.855 18:07:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov 
--version 00:10:32.855 18:07:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.114 18:07:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.114 18:07:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:33.114 18:07:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.114 18:07:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.114 --rc genhtml_branch_coverage=1 00:10:33.114 --rc genhtml_function_coverage=1 00:10:33.114 --rc genhtml_legend=1 00:10:33.114 --rc geninfo_all_blocks=1 00:10:33.114 --rc geninfo_unexecuted_blocks=1 00:10:33.114 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:33.114 ' 00:10:33.114 18:07:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.114 --rc genhtml_branch_coverage=1 
00:10:33.114 --rc genhtml_function_coverage=1 00:10:33.114 --rc genhtml_legend=1 00:10:33.114 --rc geninfo_all_blocks=1 00:10:33.114 --rc geninfo_unexecuted_blocks=1 00:10:33.114 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:33.114 ' 00:10:33.114 18:07:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.114 --rc genhtml_branch_coverage=1 00:10:33.114 --rc genhtml_function_coverage=1 00:10:33.114 --rc genhtml_legend=1 00:10:33.114 --rc geninfo_all_blocks=1 00:10:33.114 --rc geninfo_unexecuted_blocks=1 00:10:33.114 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:33.114 ' 00:10:33.114 18:07:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.114 --rc genhtml_branch_coverage=1 00:10:33.114 --rc genhtml_function_coverage=1 00:10:33.114 --rc genhtml_legend=1 00:10:33.114 --rc geninfo_all_blocks=1 00:10:33.114 --rc geninfo_unexecuted_blocks=1 00:10:33.114 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:33.114 ' 00:10:33.114 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.114 18:07:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0051316f-76a7-e811-906e-00163566263e 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0051316f-76a7-e811-906e-00163566263e 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:33.115 18:07:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.115 18:07:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.115 18:07:10 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.115 18:07:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.115 18:07:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.115 18:07:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.115 18:07:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.115 18:07:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:33.115 18:07:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.115 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.115 18:07:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:33.115 INFO: launching applications... 00:10:33.115 18:07:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3275156 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:33.115 Waiting for target to run... 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3275156 /var/tmp/spdk_tgt.sock 00:10:33.115 18:07:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3275156 ']' 00:10:33.115 18:07:10 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:10:33.115 18:07:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:33.115 18:07:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.115 18:07:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:33.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:10:33.115 18:07:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.115 18:07:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 [2024-11-26 18:07:10.437624] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:33.115 [2024-11-26 18:07:10.437704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275156 ] 00:10:33.374 [2024-11-26 18:07:10.745150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.374 [2024-11-26 18:07:10.782478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.308 18:07:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.308 18:07:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:34.308 00:10:34.308 18:07:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:34.308 INFO: shutting down applications... 00:10:34.308 18:07:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3275156 ]] 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3275156 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3275156 00:10:34.308 18:07:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3275156 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:34.568 18:07:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:34.568 SPDK target shutdown done 00:10:34.568 18:07:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:34.568 Success 00:10:34.568 00:10:34.568 real 0m1.680s 00:10:34.568 user 0m1.583s 00:10:34.568 sys 0m0.418s 00:10:34.568 18:07:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.568 18:07:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:34.568 ************************************ 00:10:34.568 END TEST json_config_extra_key 00:10:34.568 ************************************ 00:10:34.568 18:07:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:10:34.568 18:07:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.568 18:07:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.568 18:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:34.568 ************************************ 00:10:34.568 START TEST alias_rpc 00:10:34.568 ************************************ 00:10:34.568 18:07:11 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:34.827 * Looking for test storage... 00:10:34.827 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:10:34.827 18:07:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.827 18:07:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.827 18:07:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.827 18:07:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.827 18:07:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.828 18:07:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.828 --rc genhtml_branch_coverage=1 00:10:34.828 --rc genhtml_function_coverage=1 00:10:34.828 --rc genhtml_legend=1 00:10:34.828 --rc geninfo_all_blocks=1 00:10:34.828 --rc geninfo_unexecuted_blocks=1 00:10:34.828 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.828 ' 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.828 --rc genhtml_branch_coverage=1 00:10:34.828 --rc genhtml_function_coverage=1 00:10:34.828 --rc genhtml_legend=1 00:10:34.828 --rc geninfo_all_blocks=1 00:10:34.828 --rc geninfo_unexecuted_blocks=1 00:10:34.828 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.828 ' 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.828 --rc genhtml_branch_coverage=1 00:10:34.828 --rc genhtml_function_coverage=1 00:10:34.828 --rc genhtml_legend=1 00:10:34.828 --rc geninfo_all_blocks=1 00:10:34.828 --rc geninfo_unexecuted_blocks=1 00:10:34.828 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.828 ' 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.828 --rc genhtml_branch_coverage=1 00:10:34.828 --rc genhtml_function_coverage=1 00:10:34.828 --rc genhtml_legend=1 00:10:34.828 --rc geninfo_all_blocks=1 00:10:34.828 --rc geninfo_unexecuted_blocks=1 00:10:34.828 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.828 ' 00:10:34.828 18:07:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:34.828 18:07:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3275471 00:10:34.828 18:07:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:34.828 18:07:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3275471 00:10:34.828 18:07:12 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 3275471 ']' 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.828 18:07:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.828 [2024-11-26 18:07:12.157489] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:34.828 [2024-11-26 18:07:12.157531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275471 ] 00:10:34.828 [2024-11-26 18:07:12.223371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.086 [2024-11-26 18:07:12.274470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.086 18:07:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.086 18:07:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:35.086 18:07:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:35.344 18:07:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3275471 00:10:35.344 18:07:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3275471 ']' 00:10:35.344 18:07:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3275471 00:10:35.344 18:07:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:35.344 18:07:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.344 18:07:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3275471 00:10:35.603 18:07:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.603 18:07:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.603 18:07:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3275471' 00:10:35.603 killing process with pid 3275471 00:10:35.604 18:07:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 3275471 00:10:35.604 18:07:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 3275471 00:10:35.862 00:10:35.862 real 0m1.173s 00:10:35.862 user 0m1.258s 00:10:35.862 sys 0m0.399s 00:10:35.862 18:07:13 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.862 18:07:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.862 ************************************ 00:10:35.862 END TEST alias_rpc 00:10:35.862 ************************************ 00:10:35.862 18:07:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:35.862 18:07:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:35.862 18:07:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.862 18:07:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.862 18:07:13 -- common/autotest_common.sh@10 -- # set +x 00:10:35.862 ************************************ 00:10:35.862 START TEST 
spdkcli_tcp 00:10:35.862 ************************************ 00:10:35.862 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:35.862 * Looking for test storage... 00:10:35.862 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:10:35.862 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:35.862 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:35.862 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.122 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.122 18:07:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.123 18:07:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.123 --rc genhtml_branch_coverage=1 00:10:36.123 --rc genhtml_function_coverage=1 00:10:36.123 --rc genhtml_legend=1 00:10:36.123 --rc geninfo_all_blocks=1 00:10:36.123 --rc geninfo_unexecuted_blocks=1 00:10:36.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.123 ' 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.123 --rc genhtml_branch_coverage=1 00:10:36.123 --rc genhtml_function_coverage=1 00:10:36.123 --rc genhtml_legend=1 00:10:36.123 --rc geninfo_all_blocks=1 00:10:36.123 --rc geninfo_unexecuted_blocks=1 00:10:36.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.123 ' 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.123 --rc genhtml_branch_coverage=1 00:10:36.123 --rc genhtml_function_coverage=1 00:10:36.123 --rc genhtml_legend=1 00:10:36.123 --rc geninfo_all_blocks=1 00:10:36.123 --rc geninfo_unexecuted_blocks=1 00:10:36.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.123 ' 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.123 --rc genhtml_branch_coverage=1 00:10:36.123 --rc genhtml_function_coverage=1 00:10:36.123 --rc genhtml_legend=1 00:10:36.123 --rc geninfo_all_blocks=1 00:10:36.123 --rc geninfo_unexecuted_blocks=1 00:10:36.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.123 ' 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3275786 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:36.123 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3275786 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3275786 ']' 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.123 18:07:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.123 [2024-11-26 18:07:13.419665] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:36.123 [2024-11-26 18:07:13.419742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275786 ] 00:10:36.123 [2024-11-26 18:07:13.497434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.123 [2024-11-26 18:07:13.545867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.123 [2024-11-26 18:07:13.545870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.382 18:07:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.382 18:07:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:36.382 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3275844 00:10:36.382 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:36.382 18:07:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:36.641 [ 00:10:36.641 "spdk_get_version", 00:10:36.641 "rpc_get_methods", 00:10:36.641 "notify_get_notifications", 00:10:36.641 "notify_get_types", 00:10:36.641 "trace_get_info", 00:10:36.641 "trace_get_tpoint_group_mask", 00:10:36.641 "trace_disable_tpoint_group", 00:10:36.641 "trace_enable_tpoint_group", 00:10:36.641 "trace_clear_tpoint_mask", 00:10:36.641 "trace_set_tpoint_mask", 00:10:36.641 "fsdev_set_opts", 00:10:36.641 "fsdev_get_opts", 00:10:36.641 "framework_get_pci_devices", 00:10:36.641 "framework_get_config", 00:10:36.641 "framework_get_subsystems", 00:10:36.641 "vfu_tgt_set_base_path", 00:10:36.641 
"keyring_get_keys", 00:10:36.641 "iobuf_get_stats", 00:10:36.641 "iobuf_set_options", 00:10:36.641 "sock_get_default_impl", 00:10:36.641 "sock_set_default_impl", 00:10:36.641 "sock_impl_set_options", 00:10:36.641 "sock_impl_get_options", 00:10:36.641 "vmd_rescan", 00:10:36.641 "vmd_remove_device", 00:10:36.641 "vmd_enable", 00:10:36.641 "accel_get_stats", 00:10:36.641 "accel_set_options", 00:10:36.641 "accel_set_driver", 00:10:36.641 "accel_crypto_key_destroy", 00:10:36.641 "accel_crypto_keys_get", 00:10:36.641 "accel_crypto_key_create", 00:10:36.641 "accel_assign_opc", 00:10:36.641 "accel_get_module_info", 00:10:36.641 "accel_get_opc_assignments", 00:10:36.641 "bdev_get_histogram", 00:10:36.641 "bdev_enable_histogram", 00:10:36.641 "bdev_set_qos_limit", 00:10:36.641 "bdev_set_qd_sampling_period", 00:10:36.641 "bdev_get_bdevs", 00:10:36.641 "bdev_reset_iostat", 00:10:36.641 "bdev_get_iostat", 00:10:36.641 "bdev_examine", 00:10:36.641 "bdev_wait_for_examine", 00:10:36.641 "bdev_set_options", 00:10:36.641 "scsi_get_devices", 00:10:36.641 "thread_set_cpumask", 00:10:36.641 "scheduler_set_options", 00:10:36.641 "framework_get_governor", 00:10:36.641 "framework_get_scheduler", 00:10:36.641 "framework_set_scheduler", 00:10:36.641 "framework_get_reactors", 00:10:36.641 "thread_get_io_channels", 00:10:36.641 "thread_get_pollers", 00:10:36.641 "thread_get_stats", 00:10:36.641 "framework_monitor_context_switch", 00:10:36.641 "spdk_kill_instance", 00:10:36.641 "log_enable_timestamps", 00:10:36.641 "log_get_flags", 00:10:36.641 "log_clear_flag", 00:10:36.641 "log_set_flag", 00:10:36.641 "log_get_level", 00:10:36.641 "log_set_level", 00:10:36.641 "log_get_print_level", 00:10:36.641 "log_set_print_level", 00:10:36.641 "framework_enable_cpumask_locks", 00:10:36.641 "framework_disable_cpumask_locks", 00:10:36.641 "framework_wait_init", 00:10:36.641 "framework_start_init", 00:10:36.641 "virtio_blk_create_transport", 00:10:36.641 "virtio_blk_get_transports", 00:10:36.641 "vhost_controller_set_coalescing", 00:10:36.641 "vhost_get_controllers", 00:10:36.641 "vhost_delete_controller", 00:10:36.641 "vhost_create_blk_controller", 00:10:36.641 "vhost_scsi_controller_remove_target", 00:10:36.641 "vhost_scsi_controller_add_target", 00:10:36.641 "vhost_start_scsi_controller", 00:10:36.641 "vhost_create_scsi_controller", 00:10:36.641 "ublk_recover_disk", 00:10:36.641 "ublk_get_disks", 00:10:36.641 "ublk_stop_disk", 00:10:36.641 "ublk_start_disk", 00:10:36.641 "ublk_destroy_target", 00:10:36.641 "ublk_create_target", 00:10:36.641 "nbd_get_disks", 00:10:36.641 "nbd_stop_disk", 00:10:36.641 "nbd_start_disk", 00:10:36.641 "env_dpdk_get_mem_stats", 00:10:36.641 "nvmf_stop_mdns_prr", 00:10:36.641 "nvmf_publish_mdns_prr", 00:10:36.641 "nvmf_subsystem_get_listeners", 00:10:36.641 "nvmf_subsystem_get_qpairs", 00:10:36.641 "nvmf_subsystem_get_controllers", 00:10:36.641 "nvmf_get_stats", 00:10:36.641 "nvmf_get_transports", 00:10:36.641 "nvmf_create_transport", 00:10:36.641 "nvmf_get_targets", 00:10:36.641 "nvmf_delete_target", 00:10:36.641 "nvmf_create_target", 00:10:36.642 "nvmf_subsystem_allow_any_host", 00:10:36.642 "nvmf_subsystem_set_keys", 00:10:36.642 "nvmf_subsystem_remove_host", 00:10:36.642 "nvmf_subsystem_add_host", 00:10:36.642 "nvmf_ns_remove_host", 00:10:36.642 "nvmf_ns_add_host", 00:10:36.642 "nvmf_subsystem_remove_ns", 00:10:36.642 "nvmf_subsystem_set_ns_ana_group", 00:10:36.642 "nvmf_subsystem_add_ns", 00:10:36.642 "nvmf_subsystem_listener_set_ana_state", 00:10:36.642 "nvmf_discovery_get_referrals", 
00:10:36.642 "nvmf_discovery_remove_referral", 00:10:36.642 "nvmf_discovery_add_referral", 00:10:36.642 "nvmf_subsystem_remove_listener", 00:10:36.642 "nvmf_subsystem_add_listener", 00:10:36.642 "nvmf_delete_subsystem", 00:10:36.642 "nvmf_create_subsystem", 00:10:36.642 "nvmf_get_subsystems", 00:10:36.642 "nvmf_set_crdt", 00:10:36.642 "nvmf_set_config", 00:10:36.642 "nvmf_set_max_subsystems", 00:10:36.642 "iscsi_get_histogram", 00:10:36.642 "iscsi_enable_histogram", 00:10:36.642 "iscsi_set_options", 00:10:36.642 "iscsi_get_auth_groups", 00:10:36.642 "iscsi_auth_group_remove_secret", 00:10:36.642 "iscsi_auth_group_add_secret", 00:10:36.642 "iscsi_delete_auth_group", 00:10:36.642 "iscsi_create_auth_group", 00:10:36.642 "iscsi_set_discovery_auth", 00:10:36.642 "iscsi_get_options", 00:10:36.642 "iscsi_target_node_request_logout", 00:10:36.642 "iscsi_target_node_set_redirect", 00:10:36.642 "iscsi_target_node_set_auth", 00:10:36.642 "iscsi_target_node_add_lun", 00:10:36.642 "iscsi_get_stats", 00:10:36.642 "iscsi_get_connections", 00:10:36.642 "iscsi_portal_group_set_auth", 00:10:36.642 "iscsi_start_portal_group", 00:10:36.642 "iscsi_delete_portal_group", 00:10:36.642 "iscsi_create_portal_group", 00:10:36.642 "iscsi_get_portal_groups", 00:10:36.642 "iscsi_delete_target_node", 00:10:36.642 "iscsi_target_node_remove_pg_ig_maps", 00:10:36.642 "iscsi_target_node_add_pg_ig_maps", 00:10:36.642 "iscsi_create_target_node", 00:10:36.642 "iscsi_get_target_nodes", 00:10:36.642 "iscsi_delete_initiator_group", 00:10:36.642 "iscsi_initiator_group_remove_initiators", 00:10:36.642 "iscsi_initiator_group_add_initiators", 00:10:36.642 "iscsi_create_initiator_group", 00:10:36.642 "iscsi_get_initiator_groups", 00:10:36.642 "fsdev_aio_delete", 00:10:36.642 "fsdev_aio_create", 00:10:36.642 "keyring_linux_set_options", 00:10:36.642 "keyring_file_remove_key", 00:10:36.642 "keyring_file_add_key", 00:10:36.642 "vfu_virtio_create_fs_endpoint", 00:10:36.642 "vfu_virtio_create_scsi_endpoint", 00:10:36.642 "vfu_virtio_scsi_remove_target", 00:10:36.642 "vfu_virtio_scsi_add_target", 00:10:36.642 "vfu_virtio_create_blk_endpoint", 00:10:36.642 "vfu_virtio_delete_endpoint", 00:10:36.642 "iaa_scan_accel_module", 00:10:36.642 "dsa_scan_accel_module", 00:10:36.642 "ioat_scan_accel_module", 00:10:36.642 "accel_error_inject_error", 00:10:36.642 "bdev_iscsi_delete", 00:10:36.642 "bdev_iscsi_create", 00:10:36.642 "bdev_iscsi_set_options", 00:10:36.642 "bdev_virtio_attach_controller", 00:10:36.642 "bdev_virtio_scsi_get_devices", 00:10:36.642 "bdev_virtio_detach_controller", 00:10:36.642 "bdev_virtio_blk_set_hotplug", 00:10:36.642 "bdev_ftl_set_property", 00:10:36.642 "bdev_ftl_get_properties", 00:10:36.642 "bdev_ftl_get_stats", 00:10:36.642 "bdev_ftl_unmap", 00:10:36.642 "bdev_ftl_unload", 00:10:36.642 "bdev_ftl_delete", 00:10:36.642 "bdev_ftl_load", 00:10:36.642 "bdev_ftl_create", 00:10:36.642 "bdev_aio_delete", 00:10:36.642 "bdev_aio_rescan", 00:10:36.642 "bdev_aio_create", 00:10:36.642 "blobfs_create", 00:10:36.642 "blobfs_detect", 00:10:36.642 "blobfs_set_cache_size", 00:10:36.642 "bdev_zone_block_delete", 00:10:36.642 "bdev_zone_block_create", 00:10:36.642 "bdev_delay_delete", 00:10:36.642 "bdev_delay_create", 00:10:36.642 "bdev_delay_update_latency", 00:10:36.642 "bdev_split_delete", 00:10:36.642 "bdev_split_create", 00:10:36.642 "bdev_error_inject_error", 00:10:36.642 "bdev_error_delete", 00:10:36.642 "bdev_error_create", 00:10:36.642 "bdev_raid_set_options", 00:10:36.642 "bdev_raid_remove_base_bdev", 00:10:36.642 
"bdev_raid_add_base_bdev", 00:10:36.642 "bdev_raid_delete", 00:10:36.642 "bdev_raid_create", 00:10:36.642 "bdev_raid_get_bdevs", 00:10:36.642 "bdev_lvol_set_parent_bdev", 00:10:36.642 "bdev_lvol_set_parent", 00:10:36.642 "bdev_lvol_check_shallow_copy", 00:10:36.642 "bdev_lvol_start_shallow_copy", 00:10:36.642 "bdev_lvol_grow_lvstore", 00:10:36.642 "bdev_lvol_get_lvols", 00:10:36.642 "bdev_lvol_get_lvstores", 00:10:36.642 "bdev_lvol_delete", 00:10:36.642 "bdev_lvol_set_read_only", 00:10:36.642 "bdev_lvol_resize", 00:10:36.642 "bdev_lvol_decouple_parent", 00:10:36.642 "bdev_lvol_inflate", 00:10:36.642 "bdev_lvol_rename", 00:10:36.642 "bdev_lvol_clone_bdev", 00:10:36.642 "bdev_lvol_clone", 00:10:36.642 "bdev_lvol_snapshot", 00:10:36.642 "bdev_lvol_create", 00:10:36.642 "bdev_lvol_delete_lvstore", 00:10:36.642 "bdev_lvol_rename_lvstore", 00:10:36.642 "bdev_lvol_create_lvstore", 00:10:36.642 "bdev_passthru_delete", 00:10:36.642 "bdev_passthru_create", 00:10:36.642 "bdev_nvme_cuse_unregister", 00:10:36.642 "bdev_nvme_cuse_register", 00:10:36.642 "bdev_opal_new_user", 00:10:36.642 "bdev_opal_set_lock_state", 00:10:36.642 "bdev_opal_delete", 00:10:36.642 "bdev_opal_get_info", 00:10:36.642 "bdev_opal_create", 00:10:36.642 "bdev_nvme_opal_revert", 00:10:36.642 "bdev_nvme_opal_init", 00:10:36.642 "bdev_nvme_send_cmd", 00:10:36.642 "bdev_nvme_set_keys", 00:10:36.642 "bdev_nvme_get_path_iostat", 00:10:36.642 "bdev_nvme_get_mdns_discovery_info", 00:10:36.642 "bdev_nvme_stop_mdns_discovery", 00:10:36.642 "bdev_nvme_start_mdns_discovery", 00:10:36.642 "bdev_nvme_set_multipath_policy", 00:10:36.642 "bdev_nvme_set_preferred_path", 00:10:36.642 "bdev_nvme_get_io_paths", 00:10:36.642 "bdev_nvme_remove_error_injection", 00:10:36.642 "bdev_nvme_add_error_injection", 00:10:36.642 "bdev_nvme_get_discovery_info", 00:10:36.642 "bdev_nvme_stop_discovery", 00:10:36.642 "bdev_nvme_start_discovery", 00:10:36.642 "bdev_nvme_get_controller_health_info", 00:10:36.642 "bdev_nvme_disable_controller", 00:10:36.642 "bdev_nvme_enable_controller", 00:10:36.642 "bdev_nvme_reset_controller", 00:10:36.642 "bdev_nvme_get_transport_statistics", 00:10:36.642 "bdev_nvme_apply_firmware", 00:10:36.642 "bdev_nvme_detach_controller", 00:10:36.642 "bdev_nvme_get_controllers", 00:10:36.642 "bdev_nvme_attach_controller", 00:10:36.642 "bdev_nvme_set_hotplug", 00:10:36.642 "bdev_nvme_set_options", 00:10:36.642 "bdev_null_resize", 00:10:36.642 "bdev_null_delete", 00:10:36.642 "bdev_null_create", 00:10:36.642 "bdev_malloc_delete", 00:10:36.642 "bdev_malloc_create" 00:10:36.642 ] 00:10:36.642 18:07:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.642 18:07:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.642 18:07:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3275786 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3275786 ']' 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3275786 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.642 18:07:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3275786 00:10:36.901 18:07:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.901 
18:07:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.901 18:07:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3275786' 00:10:36.901 killing process with pid 3275786 00:10:36.901 18:07:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3275786 00:10:36.901 18:07:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3275786 00:10:37.161 00:10:37.161 real 0m1.239s 00:10:37.161 user 0m2.178s 00:10:37.161 sys 0m0.476s 00:10:37.161 18:07:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.161 18:07:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 ************************************ 00:10:37.161 END TEST spdkcli_tcp 00:10:37.161 ************************************ 00:10:37.161 18:07:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:37.161 18:07:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:37.161 18:07:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.161 18:07:14 -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 ************************************ 00:10:37.161 START TEST dpdk_mem_utility 00:10:37.161 ************************************ 00:10:37.161 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:37.161 * Looking for test storage... 00:10:37.420 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.420 18:07:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.420 --rc genhtml_branch_coverage=1 00:10:37.420 --rc genhtml_function_coverage=1 00:10:37.420 --rc genhtml_legend=1 00:10:37.420 --rc geninfo_all_blocks=1 00:10:37.420 --rc geninfo_unexecuted_blocks=1 00:10:37.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:37.420 ' 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.420 --rc genhtml_branch_coverage=1 00:10:37.420 --rc genhtml_function_coverage=1 00:10:37.420 --rc genhtml_legend=1 00:10:37.420 --rc geninfo_all_blocks=1 00:10:37.420 --rc geninfo_unexecuted_blocks=1 00:10:37.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:37.420 ' 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.420 --rc genhtml_branch_coverage=1 00:10:37.420 --rc genhtml_function_coverage=1 00:10:37.420 --rc genhtml_legend=1 00:10:37.420 --rc geninfo_all_blocks=1 00:10:37.420 --rc geninfo_unexecuted_blocks=1 00:10:37.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:37.420 ' 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.420 --rc genhtml_branch_coverage=1 00:10:37.420 --rc genhtml_function_coverage=1 00:10:37.420 --rc genhtml_legend=1 00:10:37.420 --rc geninfo_all_blocks=1 00:10:37.420 --rc geninfo_unexecuted_blocks=1 00:10:37.420 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:37.420 ' 00:10:37.420 18:07:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:37.420 18:07:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3276119 00:10:37.420 18:07:14 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:37.420 18:07:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3276119 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3276119 ']' 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.420 18:07:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:37.420 [2024-11-26 18:07:14.730013] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:37.420 [2024-11-26 18:07:14.730097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276119 ] 00:10:37.420 [2024-11-26 18:07:14.792052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.420 [2024-11-26 18:07:14.843724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.679 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.679 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:37.679 18:07:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:37.679 18:07:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:37.679 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.679 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:37.679 { 00:10:37.679 "filename": "/tmp/spdk_mem_dump.txt" 00:10:37.679 } 00:10:37.679 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.679 18:07:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:37.679 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:37.679 1 heaps totaling size 818.000000 MiB 00:10:37.679 size: 818.000000 MiB heap id: 0 00:10:37.679 end heaps---------- 00:10:37.679 9 mempools totaling size 603.782043 MiB 00:10:37.679 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:37.679 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:37.679 size: 100.555481 MiB name: bdev_io_3276119 00:10:37.679 size: 50.003479 MiB name: msgpool_3276119 00:10:37.679 size: 36.509338 MiB name: fsdev_io_3276119 00:10:37.679 size: 21.763794 MiB name: PDU_Pool 00:10:37.679 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:37.679 size: 4.133484 MiB name: evtpool_3276119 00:10:37.679 size: 0.026123 MiB name: Session_Pool 00:10:37.679 end mempools------- 00:10:37.679 6 memzones totaling size 4.142822 MiB 00:10:37.679 size: 1.000366 MiB name: RG_ring_0_3276119 00:10:37.679 size: 1.000366 MiB name: RG_ring_1_3276119 00:10:37.679 size: 1.000366 MiB name: RG_ring_4_3276119 
00:10:37.679 size: 1.000366 MiB name: RG_ring_5_3276119 00:10:37.679 size: 0.125366 MiB name: RG_ring_2_3276119 00:10:37.679 size: 0.015991 MiB name: RG_ring_3_3276119 00:10:37.679 end memzones------- 00:10:37.679 18:07:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:37.938 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:37.938 list of free elements. size: 10.852478 MiB 00:10:37.938 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:37.938 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:37.938 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:37.938 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:37.938 element at address: 0x200008000000 with size: 0.959839 MiB 00:10:37.938 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:37.938 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:37.938 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:37.938 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:10:37.938 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:37.938 element at address: 0x200003e00000 with size: 0.490723 MiB 00:10:37.938 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:37.938 element at address: 0x200010600000 with size: 0.481934 MiB 00:10:37.938 element at address: 0x200028200000 with size: 0.410034 MiB 00:10:37.938 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:37.938 list of standard malloc elements. size: 199.218628 MiB 00:10:37.938 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:10:37.938 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:10:37.938 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:37.938 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:37.938 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:37.938 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:37.938 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:37.938 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:37.938 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:37.938 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20000085b100 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000008df880 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:10:37.938 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20001067b600 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:37.938 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200028268f80 with size: 0.000183 MiB 00:10:37.938 element at address: 0x200028269040 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:37.938 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:37.938 list of memzone associated elements. size: 607.928894 MiB 00:10:37.938 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:37.938 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:37.938 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:37.938 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:37.938 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:37.938 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3276119_0 00:10:37.938 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:37.938 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3276119_0 00:10:37.938 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:10:37.938 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3276119_0 00:10:37.938 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:37.938 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:37.938 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:37.938 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:37.939 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:37.939 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3276119_0 00:10:37.939 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:37.939 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3276119 00:10:37.939 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:37.939 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3276119 00:10:37.939 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:10:37.939 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:37.939 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:37.939 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:37.939 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:10:37.939 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:37.939 element at address: 0x200003efde40 with size: 1.008118 MiB 00:10:37.939 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:10:37.939 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:37.939 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3276119 00:10:37.939 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:37.939 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3276119 00:10:37.939 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:37.939 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3276119 00:10:37.939 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:10:37.939 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3276119 00:10:37.939 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:10:37.939 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3276119 00:10:37.939 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:37.939 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3276119 00:10:37.939 element at address: 0x20001067b780 with size: 0.500488 MiB 00:10:37.939 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:37.939 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:10:37.939 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:37.939 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:37.939 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:37.939 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:37.939 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3276119 00:10:37.939 element at address: 0x2000008df940 with size: 0.125488 MiB 00:10:37.939 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3276119 00:10:37.939 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:10:37.939 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:37.939 element at address: 0x200028269100 with size: 0.023743 MiB 00:10:37.939 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:37.939 element at address: 0x2000008db680 with size: 0.016113 MiB 00:10:37.939 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3276119 00:10:37.939 element at address: 0x20002826f240 with size: 0.002441 MiB 00:10:37.939 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:37.939 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:37.939 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3276119 00:10:37.939 element at address: 0x2000008db480 with size: 0.000305 MiB 00:10:37.939 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3276119 00:10:37.939 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:37.939 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3276119 00:10:37.939 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:10:37.939 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:37.939 18:07:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:37.939 18:07:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3276119 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3276119 ']' 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3276119 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276119 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276119' 00:10:37.939 killing process with pid 3276119 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3276119 00:10:37.939 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3276119 00:10:38.197 00:10:38.197 real 0m1.036s 00:10:38.198 user 0m0.939s 00:10:38.198 sys 0m0.411s 00:10:38.198 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.198 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:38.198 ************************************ 00:10:38.198 END TEST dpdk_mem_utility 00:10:38.198 ************************************ 00:10:38.198 18:07:15 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:10:38.198 18:07:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.198 18:07:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.198 18:07:15 -- common/autotest_common.sh@10 -- # set +x 00:10:38.198 ************************************ 00:10:38.198 START TEST event 00:10:38.198 ************************************ 00:10:38.198 18:07:15 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:10:38.456 * Looking for test storage... 00:10:38.456 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.456 18:07:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.456 18:07:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.456 18:07:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.456 18:07:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.456 18:07:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.456 18:07:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.456 18:07:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.456 18:07:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.456 18:07:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.456 18:07:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.456 18:07:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.456 18:07:15 event -- scripts/common.sh@344 -- # case "$op" in 00:10:38.456 18:07:15 event -- scripts/common.sh@345 -- # : 1 00:10:38.456 18:07:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.456 18:07:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.456 18:07:15 event -- scripts/common.sh@365 -- # decimal 1 00:10:38.456 18:07:15 event -- scripts/common.sh@353 -- # local d=1 00:10:38.456 18:07:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.456 18:07:15 event -- scripts/common.sh@355 -- # echo 1 00:10:38.456 18:07:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.456 18:07:15 event -- scripts/common.sh@366 -- # decimal 2 00:10:38.456 18:07:15 event -- scripts/common.sh@353 -- # local d=2 00:10:38.456 18:07:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.456 18:07:15 event -- scripts/common.sh@355 -- # echo 2 00:10:38.456 18:07:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.456 18:07:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.456 18:07:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.456 18:07:15 event -- scripts/common.sh@368 -- # return 0 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.456 --rc genhtml_branch_coverage=1 00:10:38.456 --rc genhtml_function_coverage=1 00:10:38.456 --rc genhtml_legend=1 00:10:38.456 --rc geninfo_all_blocks=1 00:10:38.456 --rc geninfo_unexecuted_blocks=1 00:10:38.456 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:38.456 ' 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.456 --rc genhtml_branch_coverage=1 00:10:38.456 --rc genhtml_function_coverage=1 00:10:38.456 --rc genhtml_legend=1 00:10:38.456 --rc geninfo_all_blocks=1 00:10:38.456 --rc geninfo_unexecuted_blocks=1 00:10:38.456 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:38.456 ' 00:10:38.456 18:07:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.457 --rc genhtml_branch_coverage=1 00:10:38.457 --rc genhtml_function_coverage=1 00:10:38.457 --rc genhtml_legend=1 00:10:38.457 --rc geninfo_all_blocks=1 00:10:38.457 --rc geninfo_unexecuted_blocks=1 00:10:38.457 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:38.457 ' 00:10:38.457 18:07:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.457 --rc genhtml_branch_coverage=1 00:10:38.457 --rc genhtml_function_coverage=1 00:10:38.457 --rc genhtml_legend=1 00:10:38.457 --rc geninfo_all_blocks=1 00:10:38.457 --rc geninfo_unexecuted_blocks=1 00:10:38.457 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:38.457 ' 00:10:38.457 18:07:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:38.457 18:07:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:38.457 18:07:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:38.457 18:07:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:38.457 18:07:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:10:38.457 18:07:15 event -- common/autotest_common.sh@10 -- # set +x 00:10:38.457 ************************************ 00:10:38.457 START TEST event_perf 00:10:38.457 ************************************ 00:10:38.457 18:07:15 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:38.457 Running I/O for 1 seconds...[2024-11-26 18:07:15.842624] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:38.457 [2024-11-26 18:07:15.842694] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276435 ] 00:10:38.716 [2024-11-26 18:07:15.921431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.716 [2024-11-26 18:07:15.972510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.716 [2024-11-26 18:07:15.972602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.716 [2024-11-26 18:07:15.972865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.716 [2024-11-26 18:07:15.972869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.654 Running I/O for 1 seconds... 00:10:39.654 lcore 0: 186590 00:10:39.654 lcore 1: 186586 00:10:39.654 lcore 2: 186588 00:10:39.654 lcore 3: 186589 00:10:39.654 done. 00:10:39.654 00:10:39.654 real 0m1.192s 00:10:39.654 user 0m4.102s 00:10:39.654 sys 0m0.085s 00:10:39.654 18:07:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.654 18:07:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:39.654 ************************************ 00:10:39.654 END TEST event_perf 00:10:39.654 ************************************ 00:10:39.654 18:07:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:39.654 18:07:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:39.654 18:07:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.654 18:07:17 event -- common/autotest_common.sh@10 -- # set +x 00:10:39.654 ************************************ 00:10:39.654 START TEST event_reactor 00:10:39.654 ************************************ 00:10:39.654 18:07:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:39.654 [2024-11-26 18:07:17.099417] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:10:39.654 [2024-11-26 18:07:17.099485] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276712 ] 00:10:39.914 [2024-11-26 18:07:17.176888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.914 [2024-11-26 18:07:17.224628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.850 test_start 00:10:40.850 oneshot 00:10:40.850 tick 100 00:10:40.850 tick 100 00:10:40.850 tick 250 00:10:40.850 tick 100 00:10:40.850 tick 100 00:10:40.850 tick 100 00:10:40.850 tick 250 00:10:40.850 tick 500 00:10:40.850 tick 100 00:10:40.850 tick 100 00:10:40.850 tick 250 00:10:40.850 tick 100 00:10:40.850 tick 100 00:10:40.850 test_end 00:10:40.850 00:10:40.850 real 0m1.183s 00:10:40.850 user 0m1.097s 00:10:40.850 sys 0m0.082s 00:10:40.850 18:07:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.850 18:07:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:40.850 ************************************ 00:10:40.850 END TEST event_reactor 00:10:40.850 ************************************ 00:10:40.850 18:07:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:40.850 18:07:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.850 18:07:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.850 18:07:18 event -- common/autotest_common.sh@10 -- # set +x 00:10:41.109 ************************************ 00:10:41.109 START TEST event_reactor_perf 00:10:41.109 ************************************ 00:10:41.109 18:07:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:41.109 [2024-11-26 18:07:18.347178] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:10:41.109 [2024-11-26 18:07:18.347247] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276945 ] 00:10:41.109 [2024-11-26 18:07:18.424111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.109 [2024-11-26 18:07:18.471875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.485 test_start 00:10:42.485 test_end 00:10:42.485 Performance: 697094 events per second 00:10:42.485 00:10:42.485 real 0m1.183s 00:10:42.485 user 0m1.096s 00:10:42.485 sys 0m0.082s 00:10:42.485 18:07:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.485 18:07:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:42.485 ************************************ 00:10:42.485 END TEST event_reactor_perf 00:10:42.485 ************************************ 00:10:42.485 18:07:19 event -- event/event.sh@49 -- # uname -s 00:10:42.485 18:07:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:42.485 18:07:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:42.485 18:07:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.485 18:07:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.485 18:07:19 event -- common/autotest_common.sh@10 -- # set +x 00:10:42.485 ************************************ 00:10:42.485 START TEST event_scheduler 00:10:42.485 ************************************ 00:10:42.485 18:07:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:42.485 * Looking for test storage... 
00:10:42.485 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:10:42.485 18:07:19 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.485 18:07:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.485 18:07:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.485 18:07:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.485 18:07:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.485 18:07:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.485 18:07:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.486 18:07:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.486 --rc genhtml_branch_coverage=1 00:10:42.486 --rc genhtml_function_coverage=1 00:10:42.486 --rc genhtml_legend=1 00:10:42.486 --rc geninfo_all_blocks=1 00:10:42.486 --rc geninfo_unexecuted_blocks=1 00:10:42.486 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:42.486 ' 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.486 --rc genhtml_branch_coverage=1 00:10:42.486 --rc genhtml_function_coverage=1 00:10:42.486 --rc genhtml_legend=1 00:10:42.486 --rc geninfo_all_blocks=1 00:10:42.486 --rc geninfo_unexecuted_blocks=1 00:10:42.486 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:42.486 ' 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.486 --rc genhtml_branch_coverage=1 00:10:42.486 --rc genhtml_function_coverage=1 00:10:42.486 --rc genhtml_legend=1 00:10:42.486 --rc geninfo_all_blocks=1 00:10:42.486 --rc geninfo_unexecuted_blocks=1 00:10:42.486 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:42.486 ' 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.486 --rc genhtml_branch_coverage=1 00:10:42.486 --rc genhtml_function_coverage=1 00:10:42.486 --rc genhtml_legend=1 00:10:42.486 --rc geninfo_all_blocks=1 00:10:42.486 --rc geninfo_unexecuted_blocks=1 00:10:42.486 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:42.486 ' 00:10:42.486 18:07:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:42.486 18:07:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3277233 00:10:42.486 18:07:19 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.486 18:07:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:42.486 18:07:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3277233 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3277233 ']' 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.486 18:07:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.486 [2024-11-26 18:07:19.769340] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:10:42.486 [2024-11-26 18:07:19.769426] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277233 ] 00:10:42.486 [2024-11-26 18:07:19.825564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.486 [2024-11-26 18:07:19.876141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.486 [2024-11-26 18:07:19.876240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.486 [2024-11-26 18:07:19.876322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.486 [2024-11-26 18:07:19.876323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.746 18:07:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.746 18:07:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:42.746 18:07:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:42.746 18:07:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 [2024-11-26 18:07:19.997104] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:42.746 [2024-11-26 18:07:19.997122] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:42.746 [2024-11-26 18:07:19.997131] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:42.746 [2024-11-26 18:07:19.997137] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:42.746 [2024-11-26 18:07:19.997142] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 
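[editorial sketch] The scheduler test above starts its app with --wait-for-rpc, waits for the RPC socket, switches to the dynamic scheduler, and only then finishes framework init. A rough outline of that launch-and-configure sequence is below; $SPDK_DIR and the polling loop are stand-ins for the real waitforlisten helper, while the RPC names match the trace.

    # Sketch of the --wait-for-rpc startup seen above. $SPDK_DIR is an assumed
    # path to a built SPDK tree; the until-loop replaces the waitforlisten helper.
    SPDK_DIR=/path/to/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'kill -9 $scheduler_pid' SIGINT SIGTERM EXIT

    # Poll until the app answers on its RPC socket (simplified wait loop).
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # Same RPCs the trace issues before the test body runs.
    $RPC framework_set_scheduler dynamic
    $RPC framework_start_init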
00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 [2024-11-26 18:07:20.084974] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 ************************************ 00:10:42.746 START TEST scheduler_create_thread 00:10:42.746 ************************************ 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 2 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 3 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 4 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 5 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 
18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 6 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 7 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 8 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.746 9 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.746 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.006 10 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.006 18:07:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.006 18:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.385 18:07:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.385 18:07:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:44.385 18:07:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:44.385 18:07:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.385 18:07:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.320 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.320 00:10:45.320 real 0m2.620s 00:10:45.320 user 0m0.021s 00:10:45.320 sys 0m0.006s 00:10:45.320 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.320 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.320 ************************************ 00:10:45.320 END TEST scheduler_create_thread 00:10:45.320 ************************************ 00:10:45.579 18:07:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:45.579 18:07:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3277233 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3277233 ']' 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3277233 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277233 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277233' 00:10:45.579 killing process with pid 3277233 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3277233 00:10:45.579 18:07:22 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3277233 00:10:45.839 [2024-11-26 18:07:23.219215] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
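[editorial sketch] The scheduler_create_thread sub-test above drives everything through an RPC plugin: pinned active and idle threads on each core, a few unpinned threads, one promoted to 50% active and one deleted again before shutdown. A condensed version of those calls is sketched below, assuming the scheduler app from the previous sketch is still running and that scheduler_plugin (shipped with the scheduler test) is importable via PYTHONPATH; the rpc path is a placeholder.

    # Condensed re-creation of the RPC-plugin calls traced in scheduler_create_thread.
    rpc="/path/to/spdk/scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock"

    # One busy and one idle thread pinned to each of the four cores.
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done

    # Unpinned threads: one made 50% active, another created and deleted again.
    $rpc scheduler_thread_create -n one_third_active -a 30
    half_id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$half_id" 50
    del_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$del_id"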
00:10:46.098 00:10:46.098 real 0m3.821s 00:10:46.098 user 0m5.956s 00:10:46.098 sys 0m0.373s 00:10:46.098 18:07:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.098 18:07:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 ************************************ 00:10:46.098 END TEST event_scheduler 00:10:46.098 ************************************ 00:10:46.098 18:07:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:46.098 18:07:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:46.098 18:07:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.098 18:07:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.098 18:07:23 event -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 ************************************ 00:10:46.098 START TEST app_repeat 00:10:46.098 ************************************ 00:10:46.098 18:07:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3277877 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3277877' 00:10:46.098 Process app_repeat pid: 3277877 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:46.098 spdk_app_start Round 0 00:10:46.098 18:07:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3277877 /var/tmp/spdk-nbd.sock 00:10:46.098 18:07:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3277877 ']' 00:10:46.098 18:07:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:46.098 18:07:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.099 18:07:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:46.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:46.099 18:07:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.099 18:07:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 [2024-11-26 18:07:23.500135] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:10:46.099 [2024-11-26 18:07:23.500212] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277877 ] 00:10:46.358 [2024-11-26 18:07:23.578523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.358 [2024-11-26 18:07:23.629553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.358 [2024-11-26 18:07:23.629558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.358 18:07:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.358 18:07:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:46.358 18:07:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:46.617 Malloc0 00:10:46.617 18:07:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:46.877 Malloc1 00:10:46.877 18:07:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:46.877 18:07:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:47.136 /dev/nbd0 00:10:47.136 18:07:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:47.136 18:07:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:47.136 1+0 records in 00:10:47.136 1+0 records out 00:10:47.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227489 s, 18.0 MB/s 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:47.136 18:07:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:47.136 18:07:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.136 18:07:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:47.136 18:07:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:47.395 /dev/nbd1 00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:47.395 1+0 records in 00:10:47.395 1+0 records out 00:10:47.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219718 s, 18.6 MB/s 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:47.395 18:07:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
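[editorial sketch] Round 0 of app_repeat above builds its fixture by creating two 64 MiB malloc bdevs over the app's /var/tmp/spdk-nbd.sock socket, exporting each as an NBD device, and doing a one-block O_DIRECT read as a liveness check. A stripped-down sketch of attaching one bdev follows; the real harness retries the /proc/partitions check inside waitfornbd, and the temp-file path here is illustrative.

    # Minimal re-creation of the Malloc0 -> /dev/nbd0 attach traced above.
    rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096           # 64 MiB bdev, 4096-byte blocks (named Malloc0 in this run)
    $rpc nbd_start_disk Malloc0 /dev/nbd0     # export it as an NBD block device

    # Wait until the kernel lists the device, then sanity-read one block.
    until grep -qw nbd0 /proc/partitions; do sleep 0.1; done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    test "$(stat -c %s /tmp/nbdtest)" -eq 4096 && rm -f /tmp/nbdtest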
00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.395 18:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.654 18:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:47.654 { 00:10:47.654 "nbd_device": "/dev/nbd0", 00:10:47.654 "bdev_name": "Malloc0" 00:10:47.654 }, 00:10:47.654 { 00:10:47.654 "nbd_device": "/dev/nbd1", 00:10:47.654 "bdev_name": "Malloc1" 00:10:47.654 } 00:10:47.654 ]' 00:10:47.654 18:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:47.654 { 00:10:47.654 "nbd_device": "/dev/nbd0", 00:10:47.654 "bdev_name": "Malloc0" 00:10:47.654 }, 00:10:47.654 { 00:10:47.654 "nbd_device": "/dev/nbd1", 00:10:47.654 "bdev_name": "Malloc1" 00:10:47.654 } 00:10:47.654 ]' 00:10:47.654 18:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:47.654 /dev/nbd1' 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:47.654 /dev/nbd1' 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:47.654 256+0 records in 00:10:47.654 256+0 records out 00:10:47.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107334 s, 97.7 MB/s 00:10:47.654 18:07:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.655 18:07:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:47.655 256+0 records in 00:10:47.655 256+0 records out 00:10:47.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174477 s, 60.1 MB/s 00:10:47.655 18:07:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.655 18:07:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:47.914 256+0 records in 00:10:47.914 256+0 records out 00:10:47.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189466 s, 55.3 
MB/s 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.914 18:07:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.173 18:07:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.432 18:07:25 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.432 18:07:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:48.691 18:07:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:48.692 18:07:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:48.692 18:07:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:48.692 18:07:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:48.692 18:07:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:48.951 18:07:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:48.951 [2024-11-26 18:07:26.372076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.209 [2024-11-26 18:07:26.415999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.209 [2024-11-26 18:07:26.416004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.209 [2024-11-26 18:07:26.458508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:49.209 [2024-11-26 18:07:26.458556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:52.497 18:07:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:52.497 18:07:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:52.497 spdk_app_start Round 1 00:10:52.497 18:07:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3277877 /var/tmp/spdk-nbd.sock 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3277877 ']' 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:52.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
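[editorial sketch] The data-verify phase of each round, visible above, writes the same 1 MiB of random data to every NBD device with O_DIRECT, byte-compares each device against the source file, and then detaches the devices and asks the app to exit. A compact sketch of that loop follows; the device list and file locations come from the trace but should be treated as placeholders.

    # Write/verify cycle modelled on the nbd_dd_data_verify steps traced above.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp=/tmp/nbdrandtest

    dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct # write it to each device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                            # fails loudly on any mismatch
    done
    rm -f "$tmp"

    # Detach both devices, then stop the app (same RPCs as the trace).
    rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for dev in "${nbd_list[@]}"; do
        $rpc nbd_stop_disk "$dev"
    done
    $rpc spdk_kill_instance SIGTERM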
00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.497 18:07:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:52.497 18:07:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:52.497 Malloc0 00:10:52.497 18:07:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:52.497 Malloc1 00:10:52.497 18:07:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:52.497 18:07:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.498 18:07:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:52.498 18:07:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:52.498 18:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:52.498 18:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:52.498 18:07:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:52.757 /dev/nbd0 00:10:52.757 18:07:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:53.015 18:07:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:53.015 1+0 records in 00:10:53.015 1+0 records out 00:10:53.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198855 s, 20.6 MB/s 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:53.015 18:07:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:53.015 18:07:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.015 18:07:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.015 18:07:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:53.273 /dev/nbd1 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:53.274 1+0 records in 00:10:53.274 1+0 records out 00:10:53.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162522 s, 25.2 MB/s 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:53.274 18:07:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.274 18:07:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:53.533 { 00:10:53.533 "nbd_device": "/dev/nbd0", 00:10:53.533 "bdev_name": "Malloc0" 00:10:53.533 }, 00:10:53.533 { 00:10:53.533 "nbd_device": "/dev/nbd1", 00:10:53.533 "bdev_name": "Malloc1" 00:10:53.533 } 00:10:53.533 ]' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:53.533 { 00:10:53.533 "nbd_device": "/dev/nbd0", 00:10:53.533 "bdev_name": "Malloc0" 00:10:53.533 }, 00:10:53.533 { 00:10:53.533 "nbd_device": "/dev/nbd1", 00:10:53.533 "bdev_name": "Malloc1" 00:10:53.533 } 00:10:53.533 ]' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:53.533 /dev/nbd1' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:53.533 /dev/nbd1' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:53.533 256+0 records in 00:10:53.533 256+0 records out 00:10:53.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108298 s, 96.8 MB/s 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:53.533 256+0 records in 00:10:53.533 256+0 records out 00:10:53.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178536 s, 58.7 MB/s 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:53.533 256+0 records in 00:10:53.533 256+0 records out 00:10:53.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192407 s, 54.5 MB/s 00:10:53.533 18:07:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:53.534 18:07:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:53.793 18:07:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:54.052 18:07:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.053 18:07:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:54.312 18:07:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:54.312 18:07:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:54.597 18:07:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:54.856 [2024-11-26 18:07:32.147093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.856 [2024-11-26 18:07:32.191543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.856 [2024-11-26 18:07:32.191549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.856 [2024-11-26 18:07:32.241512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:54.856 [2024-11-26 18:07:32.241558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:58.143 18:07:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:58.143 18:07:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:58.143 spdk_app_start Round 2 00:10:58.143 18:07:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3277877 /var/tmp/spdk-nbd.sock 00:10:58.143 18:07:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3277877 ']' 00:10:58.143 18:07:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:58.143 18:07:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.143 18:07:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:58.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
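The round above ends with nbd_dd_data_verify: 1 MiB of random data is generated with dd, written to each exported /dev/nbd device, and read back with cmp before the devices are detached. A minimal standalone sketch of that write/verify pattern follows; the device list and temp-file path are illustrative stand-ins, not the variables of the SPDK helper itself.

    # Sketch of the dd/cmp data-verify cycle seen in the trace (paths are examples only).
    nbd_list="/dev/nbd0 /dev/nbd1"
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB random pattern
    for dev in $nbd_list; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern to each device
    done
    for dev in $nbd_list; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "verify failed on $dev"   # read back and compare
    done
    rm "$tmp_file"

Using oflag=direct on the writes keeps the page cache out of the comparison, which is why the trace's dd commands carry the same flag.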
00:10:58.143 18:07:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.143 18:07:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:58.143 18:07:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.143 18:07:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:58.143 18:07:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:58.143 Malloc0 00:10:58.143 18:07:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:58.402 Malloc1 00:10:58.402 18:07:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.402 18:07:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:58.661 /dev/nbd0 00:10:58.661 18:07:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:58.661 18:07:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:58.661 1+0 records in 00:10:58.661 1+0 records out 00:10:58.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208223 s, 19.7 MB/s 00:10:58.661 18:07:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:58.661 18:07:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:58.661 18:07:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:58.661 18:07:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:58.661 18:07:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:58.661 18:07:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.661 18:07:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.661 18:07:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:58.920 /dev/nbd1 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:58.920 1+0 records in 00:10:58.920 1+0 records out 00:10:58.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225426 s, 18.2 MB/s 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:58.920 18:07:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.920 18:07:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:59.179 { 00:10:59.179 "nbd_device": "/dev/nbd0", 00:10:59.179 "bdev_name": "Malloc0" 00:10:59.179 }, 00:10:59.179 { 00:10:59.179 "nbd_device": "/dev/nbd1", 00:10:59.179 "bdev_name": "Malloc1" 00:10:59.179 } 00:10:59.179 ]' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:59.179 { 00:10:59.179 "nbd_device": "/dev/nbd0", 00:10:59.179 "bdev_name": "Malloc0" 00:10:59.179 }, 00:10:59.179 { 00:10:59.179 "nbd_device": "/dev/nbd1", 00:10:59.179 "bdev_name": "Malloc1" 00:10:59.179 } 00:10:59.179 ]' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:59.179 /dev/nbd1' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:59.179 /dev/nbd1' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:59.179 18:07:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:59.179 256+0 records in 00:10:59.180 256+0 records out 00:10:59.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108668 s, 96.5 MB/s 00:10:59.180 18:07:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.180 18:07:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:59.180 256+0 records in 00:10:59.180 256+0 records out 00:10:59.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017046 s, 61.5 MB/s 00:10:59.180 18:07:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.180 18:07:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:59.439 256+0 records in 00:10:59.439 256+0 records out 00:10:59.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019288 s, 54.4 MB/s 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.439 18:07:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:59.698 18:07:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:59.698 18:07:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:59.698 18:07:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:59.698 18:07:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.698 18:07:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.698 18:07:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:59.699 18:07:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:59.699 18:07:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.699 18:07:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.699 18:07:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.963 18:07:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:00.222 18:07:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:00.222 18:07:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:00.481 18:07:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:00.481 [2024-11-26 18:07:37.897254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.740 [2024-11-26 18:07:37.940878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.740 [2024-11-26 18:07:37.940883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.740 [2024-11-26 18:07:37.984003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:00.740 [2024-11-26 18:07:37.984052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:03.285 18:07:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3277877 /var/tmp/spdk-nbd.sock 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3277877 ']' 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:03.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
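After both devices are stopped, the trace asks the target which NBD devices are still exported and expects the count to be zero before killing the app. A sketch of that count check, assuming rpc.py is on PATH and the RPC socket is /var/tmp/spdk-nbd.sock as in the trace:

    # Sketch: count NBD devices the target still exports; expect none after teardown.
    disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || echo "expected 0 exported NBD devices, found $count"

The '|| true' mirrors the bare 'true' in the trace: grep -c exits non-zero when it counts zero matches, and the test only cares about the printed count.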
00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:03.543 18:07:40 event.app_repeat -- event/event.sh@39 -- # killprocess 3277877 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3277877 ']' 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3277877 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.543 18:07:40 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277877 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277877' 00:11:03.801 killing process with pid 3277877 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3277877 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3277877 00:11:03.801 spdk_app_start is called in Round 0. 00:11:03.801 Shutdown signal received, stop current app iteration 00:11:03.801 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:11:03.801 spdk_app_start is called in Round 1. 00:11:03.801 Shutdown signal received, stop current app iteration 00:11:03.801 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:11:03.801 spdk_app_start is called in Round 2. 00:11:03.801 Shutdown signal received, stop current app iteration 00:11:03.801 Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 reinitialization... 00:11:03.801 spdk_app_start is called in Round 3. 
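killprocess above follows a fixed shape: check that the pid is non-empty and still alive, refuse to signal a sudo wrapper, send SIGTERM, then wait so the shell reaps the child. A condensed approximation of that sequence (the real autotest helper carries extra retries and logging):

    # Sketch of the killprocess pattern from the trace; an approximation, not the helper verbatim.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                    # never SIGTERM the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap the child; ignore the SIGTERM exit code
    }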
00:11:03.801 Shutdown signal received, stop current app iteration 00:11:03.801 18:07:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:03.801 18:07:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:03.801 00:11:03.801 real 0m17.714s 00:11:03.801 user 0m39.192s 00:11:03.801 sys 0m3.248s 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.801 18:07:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:03.801 ************************************ 00:11:03.801 END TEST app_repeat 00:11:03.801 ************************************ 00:11:03.801 18:07:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:03.801 18:07:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:03.801 18:07:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.801 18:07:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.801 18:07:41 event -- common/autotest_common.sh@10 -- # set +x 00:11:04.060 ************************************ 00:11:04.060 START TEST cpu_locks 00:11:04.060 ************************************ 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:04.060 * Looking for test storage... 00:11:04.060 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.060 18:07:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.060 --rc genhtml_branch_coverage=1 00:11:04.060 --rc genhtml_function_coverage=1 00:11:04.060 --rc genhtml_legend=1 00:11:04.060 --rc geninfo_all_blocks=1 00:11:04.060 --rc geninfo_unexecuted_blocks=1 00:11:04.060 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:04.060 ' 00:11:04.060 18:07:41 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.060 --rc genhtml_branch_coverage=1 00:11:04.060 --rc genhtml_function_coverage=1 00:11:04.060 --rc genhtml_legend=1 00:11:04.061 --rc geninfo_all_blocks=1 00:11:04.061 --rc geninfo_unexecuted_blocks=1 00:11:04.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:04.061 ' 00:11:04.061 18:07:41 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.061 --rc genhtml_branch_coverage=1 00:11:04.061 --rc genhtml_function_coverage=1 00:11:04.061 --rc genhtml_legend=1 00:11:04.061 --rc geninfo_all_blocks=1 00:11:04.061 --rc geninfo_unexecuted_blocks=1 00:11:04.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:04.061 ' 00:11:04.061 18:07:41 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.061 --rc genhtml_branch_coverage=1 00:11:04.061 --rc genhtml_function_coverage=1 00:11:04.061 --rc genhtml_legend=1 00:11:04.061 --rc geninfo_all_blocks=1 00:11:04.061 --rc geninfo_unexecuted_blocks=1 00:11:04.061 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:04.061 ' 00:11:04.061 18:07:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:04.061 18:07:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:04.061 18:07:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:04.061 18:07:41 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:04.061 18:07:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.061 18:07:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.061 18:07:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.061 ************************************ 00:11:04.061 START TEST default_locks 00:11:04.061 ************************************ 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3281431 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3281431 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3281431 ']' 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.061 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.061 [2024-11-26 18:07:41.502173] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
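Before any lock checks, waitforlisten blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A rough sketch of that polling loop, assuming rpc.py is on PATH and using the default /var/tmp/spdk.sock path from the trace; the real helper's retry bookkeeping differs:

    # Sketch: poll the target's RPC socket until it responds or ~100 attempts pass.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                      # app died during startup
            rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }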
00:11:04.061 [2024-11-26 18:07:41.502243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281431 ] 00:11:04.352 [2024-11-26 18:07:41.580498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.352 [2024-11-26 18:07:41.627011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.610 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.610 18:07:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:04.610 18:07:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3281431 00:11:04.610 18:07:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:04.610 18:07:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3281431 00:11:05.176 lslocks: write error 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3281431 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3281431 ']' 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3281431 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3281431 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3281431' 00:11:05.176 killing process with pid 3281431 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3281431 00:11:05.176 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3281431 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3281431 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3281431 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3281431 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3281431 ']' 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3281431) - No such process 00:11:05.434 ERROR: process (pid: 3281431) is no longer running 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:05.434 00:11:05.434 real 0m1.289s 00:11:05.434 user 0m1.267s 00:11:05.434 sys 0m0.609s 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.434 18:07:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 ************************************ 00:11:05.434 END TEST default_locks 00:11:05.434 ************************************ 00:11:05.434 18:07:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:05.434 18:07:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.434 18:07:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.434 18:07:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 ************************************ 00:11:05.434 START TEST default_locks_via_rpc 00:11:05.434 ************************************ 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3281718 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3281718 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3281718 ']' 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 
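The default_locks case above boils down to one assertion in each direction: while the target runs on core mask 0x1, lslocks for its pid must show an spdk_cpu_lock entry, and after killprocess no lock files may remain (the failed waitforlisten and the "No such process" line are the expected half of that check). A sketch of the positive check, reusing the lslocks/grep pipeline from the trace; spdk_tgt_pid stands in for the pid printed above:

    # Sketch: assert that the running target holds its per-core CPU lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock        # non-zero exit means no core lock is held
    }
    locks_exist "$spdk_tgt_pid" || echo "expected an spdk_cpu_lock entry for pid $spdk_tgt_pid"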
00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.434 18:07:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 [2024-11-26 18:07:42.862704] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:05.434 [2024-11-26 18:07:42.862770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281718 ] 00:11:05.692 [2024-11-26 18:07:42.939711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.692 [2024-11-26 18:07:42.990506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3281718 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3281718 00:11:05.951 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:06.517 18:07:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3281718 00:11:06.517 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3281718 ']' 00:11:06.517 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3281718 00:11:06.517 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3281718 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3281718' 00:11:06.518 killing process with pid 3281718 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3281718 00:11:06.518 18:07:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3281718 00:11:06.776 00:11:06.776 real 0m1.231s 00:11:06.776 user 0m1.228s 00:11:06.776 sys 0m0.550s 00:11:06.776 18:07:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.776 18:07:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.776 ************************************ 00:11:06.776 END TEST default_locks_via_rpc 00:11:06.776 ************************************ 00:11:06.776 18:07:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:06.776 18:07:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.776 18:07:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.776 18:07:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:06.776 ************************************ 00:11:06.776 START TEST non_locking_app_on_locked_coremask 00:11:06.776 ************************************ 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3282001 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3282001 /var/tmp/spdk.sock 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3282001 ']' 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.776 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:06.776 [2024-11-26 18:07:44.163351] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
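default_locks_via_rpc exercises the same lock state at runtime instead of at start-up: framework_disable_cpumask_locks must drop the core locks and framework_enable_cpumask_locks must take them again, both over the target's RPC socket. The trace's no_locks helper inspects the lock files directly; the sketch below settles for the lslocks view of the same state (socket path and RPC names are taken from the trace):

    # Sketch: toggle CPU-core locks over JSON-RPC and verify with lslocks.
    rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock unexpectedly still held"
    rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "lock was not re-acquired"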
00:11:06.776 [2024-11-26 18:07:44.163429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282001 ] 00:11:07.035 [2024-11-26 18:07:44.237573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.035 [2024-11-26 18:07:44.288166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3282010 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3282010 /var/tmp/spdk2.sock 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3282010 ']' 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:07.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.293 18:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.293 [2024-11-26 18:07:44.529468] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:07.294 [2024-11-26 18:07:44.529544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282010 ] 00:11:07.294 [2024-11-26 18:07:44.618979] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
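non_locking_app_on_locked_coremask then shows why the locks exist: a second spdk_tgt started on the same core mask only comes up because it passes --disable-cpumask-locks and its own RPC socket, which is the "CPU core locks deactivated" notice above. A sketch of launching that pair, assuming the commands run from an SPDK checkout so build/bin/spdk_tgt resolves, and reusing the waitforlisten sketch from earlier:

    # Sketch: first instance takes the core-0 lock, second opts out of locking on the same mask.
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock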
00:11:07.294 [2024-11-26 18:07:44.619003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.294 [2024-11-26 18:07:44.714252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.861 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.861 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:07.861 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3282001 00:11:07.861 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3282001 00:11:07.861 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:08.427 lslocks: write error 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3282001 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3282001 ']' 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3282001 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282001 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282001' 00:11:08.427 killing process with pid 3282001 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3282001 00:11:08.427 18:07:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3282001 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3282010 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3282010 ']' 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3282010 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282010 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282010' 00:11:09.362 
killing process with pid 3282010 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3282010 00:11:09.362 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3282010 00:11:09.621 00:11:09.621 real 0m2.783s 00:11:09.621 user 0m2.825s 00:11:09.621 sys 0m0.994s 00:11:09.621 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.621 18:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.621 ************************************ 00:11:09.621 END TEST non_locking_app_on_locked_coremask 00:11:09.621 ************************************ 00:11:09.621 18:07:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:09.621 18:07:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:09.621 18:07:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.621 18:07:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:09.621 ************************************ 00:11:09.621 START TEST locking_app_on_unlocked_coremask 00:11:09.621 ************************************ 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3282528 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3282528 /var/tmp/spdk.sock 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3282528 ']' 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.621 18:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.621 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.621 [2024-11-26 18:07:47.022203] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:09.621 [2024-11-26 18:07:47.022278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282528 ] 00:11:09.880 [2024-11-26 18:07:47.094449] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:09.880 [2024-11-26 18:07:47.094478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.880 [2024-11-26 18:07:47.141350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3282560 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3282560 /var/tmp/spdk2.sock 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3282560 ']' 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:10.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.138 18:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.138 [2024-11-26 18:07:47.387856] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:10.138 [2024-11-26 18:07:47.387916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282560 ] 00:11:10.138 [2024-11-26 18:07:47.483059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.138 [2024-11-26 18:07:47.573895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.705 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.705 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:10.705 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3282560 00:11:10.705 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3282560 00:11:10.705 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:11.273 lslocks: write error 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3282528 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3282528 ']' 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3282528 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282528 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282528' 00:11:11.273 killing process with pid 3282528 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3282528 00:11:11.273 18:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3282528 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3282560 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3282560 ']' 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3282560 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282560 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.208 18:07:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282560' 00:11:12.208 killing process with pid 3282560 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3282560 00:11:12.208 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3282560 00:11:12.466 00:11:12.466 real 0m2.661s 00:11:12.466 user 0m2.686s 00:11:12.466 sys 0m0.985s 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.466 ************************************ 00:11:12.466 END TEST locking_app_on_unlocked_coremask 00:11:12.466 ************************************ 00:11:12.466 18:07:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:12.466 18:07:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.466 18:07:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.466 18:07:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:12.466 ************************************ 00:11:12.466 START TEST locking_app_on_locked_coremask 00:11:12.466 ************************************ 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3283005 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3283005 /var/tmp/spdk.sock 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3283005 ']' 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.466 18:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.466 [2024-11-26 18:07:49.756323] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:12.466 [2024-11-26 18:07:49.756399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283005 ] 00:11:12.466 [2024-11-26 18:07:49.830083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.466 [2024-11-26 18:07:49.882555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3283110 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3283110 /var/tmp/spdk2.sock 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3283110 /var/tmp/spdk2.sock 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3283110 /var/tmp/spdk2.sock 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3283110 ']' 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:12.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.724 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.724 [2024-11-26 18:07:50.160636] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:12.724 [2024-11-26 18:07:50.160705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283110 ] 00:11:12.983 [2024-11-26 18:07:50.258513] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3283005 has claimed it. 00:11:12.983 [2024-11-26 18:07:50.258555] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:13.549 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3283110) - No such process 00:11:13.549 ERROR: process (pid: 3283110) is no longer running 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3283005 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3283005 00:11:13.549 18:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:14.116 lslocks: write error 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3283005 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3283005 ']' 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3283005 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283005 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283005' 00:11:14.116 killing process with pid 3283005 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3283005 00:11:14.116 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3283005 00:11:14.374 00:11:14.374 real 0m1.979s 00:11:14.374 user 0m2.156s 00:11:14.374 sys 0m0.662s 00:11:14.374 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:14.374 18:07:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.374 ************************************ 00:11:14.374 END TEST locking_app_on_locked_coremask 00:11:14.374 ************************************ 00:11:14.374 18:07:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:14.374 18:07:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.374 18:07:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.374 18:07:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:14.374 ************************************ 00:11:14.374 START TEST locking_overlapped_coremask 00:11:14.374 ************************************ 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3283392 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3283392 /var/tmp/spdk.sock 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3283392 ']' 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.374 18:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.374 [2024-11-26 18:07:51.805560] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:14.374 [2024-11-26 18:07:51.805613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283392 ] 00:11:14.633 [2024-11-26 18:07:51.883032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.633 [2024-11-26 18:07:51.938402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.633 [2024-11-26 18:07:51.938503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.633 [2024-11-26 18:07:51.938503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.892 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3283406 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3283406 /var/tmp/spdk2.sock 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3283406 /var/tmp/spdk2.sock 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3283406 /var/tmp/spdk2.sock 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3283406 ']' 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:14.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.893 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.893 [2024-11-26 18:07:52.197120] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:14.893 [2024-11-26 18:07:52.197199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283406 ] 00:11:14.893 [2024-11-26 18:07:52.271555] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3283392 has claimed it. 00:11:14.893 [2024-11-26 18:07:52.271581] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:15.831 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3283406) - No such process 00:11:15.831 ERROR: process (pid: 3283406) is no longer running 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3283392 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3283392 ']' 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3283392 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283392 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283392' 00:11:15.831 killing process with pid 3283392 00:11:15.831 18:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3283392 00:11:15.831 18:07:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3283392 00:11:16.090 00:11:16.090 real 0m1.545s 00:11:16.090 user 0m4.374s 00:11:16.090 sys 0m0.416s 00:11:16.090 18:07:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.090 18:07:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:16.090 ************************************ 00:11:16.090 END TEST locking_overlapped_coremask 00:11:16.090 ************************************ 00:11:16.090 18:07:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:16.090 18:07:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.090 18:07:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.091 18:07:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.091 ************************************ 00:11:16.091 START TEST locking_overlapped_coremask_via_rpc 00:11:16.091 ************************************ 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3283694 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3283694 /var/tmp/spdk.sock 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3283694 ']' 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.091 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.091 [2024-11-26 18:07:53.415242] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:16.091 [2024-11-26 18:07:53.415317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283694 ] 00:11:16.091 [2024-11-26 18:07:53.487937] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:16.091 [2024-11-26 18:07:53.487964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.350 [2024-11-26 18:07:53.541032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.350 [2024-11-26 18:07:53.541131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.350 [2024-11-26 18:07:53.541131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3283700 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3283700 /var/tmp/spdk2.sock 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3283700 ']' 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:16.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.350 18:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.609 [2024-11-26 18:07:53.803931] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:16.609 [2024-11-26 18:07:53.804017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283700 ] 00:11:16.609 [2024-11-26 18:07:53.887643] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:16.609 [2024-11-26 18:07:53.887673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.609 [2024-11-26 18:07:53.983497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.609 [2024-11-26 18:07:53.983591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:16.609 [2024-11-26 18:07:53.983592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.177 [2024-11-26 18:07:54.447438] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3283694 has claimed it. 
00:11:17.177 request: 00:11:17.177 { 00:11:17.177 "method": "framework_enable_cpumask_locks", 00:11:17.177 "req_id": 1 00:11:17.177 } 00:11:17.177 Got JSON-RPC error response 00:11:17.177 response: 00:11:17.177 { 00:11:17.177 "code": -32603, 00:11:17.177 "message": "Failed to claim CPU core: 2" 00:11:17.177 } 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3283694 /var/tmp/spdk.sock 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3283694 ']' 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.177 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3283700 /var/tmp/spdk2.sock 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3283700 ']' 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:17.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.436 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:17.696 00:11:17.696 real 0m1.575s 00:11:17.696 user 0m0.827s 00:11:17.696 sys 0m0.147s 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.696 18:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.696 ************************************ 00:11:17.696 END TEST locking_overlapped_coremask_via_rpc 00:11:17.696 ************************************ 00:11:17.696 18:07:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:17.696 18:07:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3283694 ]] 00:11:17.696 18:07:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3283694 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3283694 ']' 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3283694 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283694 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283694' 00:11:17.696 killing process with pid 3283694 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3283694 00:11:17.696 18:07:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3283694 00:11:18.265 18:07:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3283700 ]] 00:11:18.265 18:07:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3283700 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3283700 ']' 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3283700 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283700 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283700' 00:11:18.265 killing process with pid 3283700 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3283700 00:11:18.265 18:07:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3283700 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3283694 ]] 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3283694 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3283694 ']' 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3283694 00:11:18.524 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3283694) - No such process 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3283694 is not found' 00:11:18.524 Process with pid 3283694 is not found 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3283700 ]] 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3283700 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3283700 ']' 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3283700 00:11:18.524 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3283700) - No such process 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3283700 is not found' 00:11:18.524 Process with pid 3283700 is not found 00:11:18.524 18:07:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:18.524 00:11:18.524 real 0m14.574s 00:11:18.524 user 0m24.901s 00:11:18.524 sys 0m5.397s 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.524 18:07:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.524 ************************************ 00:11:18.524 END TEST cpu_locks 00:11:18.524 ************************************ 00:11:18.524 00:11:18.524 real 0m40.243s 00:11:18.524 user 1m16.601s 00:11:18.524 sys 0m9.613s 00:11:18.524 18:07:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.524 18:07:55 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.524 ************************************ 00:11:18.524 END TEST event 00:11:18.524 ************************************ 00:11:18.524 18:07:55 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:11:18.524 18:07:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.524 18:07:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.524 18:07:55 -- common/autotest_common.sh@10 -- # set +x 00:11:18.524 ************************************ 00:11:18.524 START TEST thread 00:11:18.524 ************************************ 00:11:18.525 18:07:55 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:11:18.784 * Looking for test storage... 00:11:18.784 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.784 18:07:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.784 18:07:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.784 18:07:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.784 18:07:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.784 18:07:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.784 18:07:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.784 18:07:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.784 18:07:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.784 18:07:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.784 18:07:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.784 18:07:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.784 18:07:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:18.784 18:07:56 thread -- scripts/common.sh@345 -- # : 1 00:11:18.784 18:07:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.784 18:07:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.784 18:07:56 thread -- scripts/common.sh@365 -- # decimal 1 00:11:18.784 18:07:56 thread -- scripts/common.sh@353 -- # local d=1 00:11:18.784 18:07:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.784 18:07:56 thread -- scripts/common.sh@355 -- # echo 1 00:11:18.784 18:07:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.784 18:07:56 thread -- scripts/common.sh@366 -- # decimal 2 00:11:18.784 18:07:56 thread -- scripts/common.sh@353 -- # local d=2 00:11:18.784 18:07:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.784 18:07:56 thread -- scripts/common.sh@355 -- # echo 2 00:11:18.784 18:07:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.784 18:07:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.784 18:07:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.784 18:07:56 thread -- scripts/common.sh@368 -- # return 0 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.784 --rc genhtml_branch_coverage=1 00:11:18.784 --rc genhtml_function_coverage=1 00:11:18.784 --rc genhtml_legend=1 00:11:18.784 --rc geninfo_all_blocks=1 00:11:18.784 --rc geninfo_unexecuted_blocks=1 00:11:18.784 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:18.784 ' 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.784 --rc genhtml_branch_coverage=1 00:11:18.784 --rc genhtml_function_coverage=1 00:11:18.784 --rc genhtml_legend=1 
00:11:18.784 --rc geninfo_all_blocks=1 00:11:18.784 --rc geninfo_unexecuted_blocks=1 00:11:18.784 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:18.784 ' 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.784 --rc genhtml_branch_coverage=1 00:11:18.784 --rc genhtml_function_coverage=1 00:11:18.784 --rc genhtml_legend=1 00:11:18.784 --rc geninfo_all_blocks=1 00:11:18.784 --rc geninfo_unexecuted_blocks=1 00:11:18.784 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:18.784 ' 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.784 --rc genhtml_branch_coverage=1 00:11:18.784 --rc genhtml_function_coverage=1 00:11:18.784 --rc genhtml_legend=1 00:11:18.784 --rc geninfo_all_blocks=1 00:11:18.784 --rc geninfo_unexecuted_blocks=1 00:11:18.784 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:18.784 ' 00:11:18.784 18:07:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.784 18:07:56 thread -- common/autotest_common.sh@10 -- # set +x 00:11:18.784 ************************************ 00:11:18.784 START TEST thread_poller_perf 00:11:18.784 ************************************ 00:11:18.784 18:07:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:18.784 [2024-11-26 18:07:56.159190] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:18.784 [2024-11-26 18:07:56.159261] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284309 ] 00:11:19.043 [2024-11-26 18:07:56.235201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.043 [2024-11-26 18:07:56.285034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.043 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:19.980 [2024-11-26T17:07:57.427Z] ====================================== 00:11:19.980 [2024-11-26T17:07:57.427Z] busy:2706236336 (cyc) 00:11:19.980 [2024-11-26T17:07:57.427Z] total_run_count: 616000 00:11:19.980 [2024-11-26T17:07:57.427Z] tsc_hz: 2700000000 (cyc) 00:11:19.980 [2024-11-26T17:07:57.427Z] ====================================== 00:11:19.980 [2024-11-26T17:07:57.427Z] poller_cost: 4393 (cyc), 1627 (nsec) 00:11:19.980 00:11:19.980 real 0m1.187s 00:11:19.980 user 0m1.101s 00:11:19.980 sys 0m0.080s 00:11:19.980 18:07:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.980 18:07:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:19.980 ************************************ 00:11:19.980 END TEST thread_poller_perf 00:11:19.980 ************************************ 00:11:19.980 18:07:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:19.980 18:07:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:19.980 18:07:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.980 18:07:57 thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.980 ************************************ 00:11:19.980 START TEST thread_poller_perf 00:11:19.980 ************************************ 00:11:19.980 18:07:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:19.980 [2024-11-26 18:07:57.414403] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:19.980 [2024-11-26 18:07:57.414471] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284587 ] 00:11:20.239 [2024-11-26 18:07:57.491759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.239 [2024-11-26 18:07:57.538893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.239 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:21.180 [2024-11-26T17:07:58.627Z] ====================================== 00:11:21.180 [2024-11-26T17:07:58.627Z] busy:2701843510 (cyc) 00:11:21.180 [2024-11-26T17:07:58.627Z] total_run_count: 9551000 00:11:21.180 [2024-11-26T17:07:58.627Z] tsc_hz: 2700000000 (cyc) 00:11:21.180 [2024-11-26T17:07:58.627Z] ====================================== 00:11:21.180 [2024-11-26T17:07:58.627Z] poller_cost: 282 (cyc), 104 (nsec) 00:11:21.180 00:11:21.180 real 0m1.183s 00:11:21.180 user 0m1.103s 00:11:21.180 sys 0m0.075s 00:11:21.180 18:07:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.180 18:07:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:21.180 ************************************ 00:11:21.180 END TEST thread_poller_perf 00:11:21.180 ************************************ 00:11:21.180 18:07:58 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:21.180 18:07:58 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:11:21.180 18:07:58 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.180 18:07:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.180 18:07:58 thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.439 ************************************ 00:11:21.439 START TEST thread_spdk_lock 00:11:21.439 ************************************ 00:11:21.439 18:07:58 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:11:21.439 [2024-11-26 18:07:58.666074] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:21.439 [2024-11-26 18:07:58.666164] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284860 ] 00:11:21.439 [2024-11-26 18:07:58.744663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.439 [2024-11-26 18:07:58.792335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.439 [2024-11-26 18:07:58.792339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.007 [2024-11-26 18:07:59.280545] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:22.007 [2024-11-26 18:07:59.280582] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:22.007 [2024-11-26 18:07:59.280592] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14dbc00 00:11:22.007 [2024-11-26 18:07:59.281417] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:22.007 [2024-11-26 18:07:59.281521] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:22.007 [2024-11-26 
18:07:59.281541] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:22.007 Starting test contend 00:11:22.007 Worker Delay Wait us Hold us Total us 00:11:22.007 0 3 157397 184813 342211 00:11:22.007 1 5 82568 283998 366566 00:11:22.007 PASS test contend 00:11:22.007 Starting test hold_by_poller 00:11:22.007 PASS test hold_by_poller 00:11:22.007 Starting test hold_by_message 00:11:22.007 PASS test hold_by_message 00:11:22.007 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:11:22.007 100014 assertions passed 00:11:22.007 0 assertions failed 00:11:22.007 00:11:22.007 real 0m0.673s 00:11:22.007 user 0m1.082s 00:11:22.007 sys 0m0.078s 00:11:22.007 18:07:59 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.007 18:07:59 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:22.007 ************************************ 00:11:22.007 END TEST thread_spdk_lock 00:11:22.007 ************************************ 00:11:22.007 00:11:22.007 real 0m3.430s 00:11:22.007 user 0m3.490s 00:11:22.007 sys 0m0.442s 00:11:22.007 18:07:59 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.007 18:07:59 thread -- common/autotest_common.sh@10 -- # set +x 00:11:22.007 ************************************ 00:11:22.007 END TEST thread 00:11:22.007 ************************************ 00:11:22.007 18:07:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:22.007 18:07:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:11:22.007 18:07:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.007 18:07:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.007 18:07:59 -- common/autotest_common.sh@10 -- # set +x 00:11:22.007 ************************************ 00:11:22.008 START TEST app_cmdline 00:11:22.008 ************************************ 00:11:22.008 18:07:59 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:11:22.267 * Looking for test storage... 
00:11:22.267 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.267 18:07:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.267 --rc genhtml_branch_coverage=1 00:11:22.267 --rc genhtml_function_coverage=1 00:11:22.267 --rc genhtml_legend=1 00:11:22.267 --rc geninfo_all_blocks=1 00:11:22.267 --rc geninfo_unexecuted_blocks=1 00:11:22.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:22.267 ' 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.267 --rc genhtml_branch_coverage=1 00:11:22.267 --rc genhtml_function_coverage=1 00:11:22.267 --rc 
genhtml_legend=1 00:11:22.267 --rc geninfo_all_blocks=1 00:11:22.267 --rc geninfo_unexecuted_blocks=1 00:11:22.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:22.267 ' 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.267 --rc genhtml_branch_coverage=1 00:11:22.267 --rc genhtml_function_coverage=1 00:11:22.267 --rc genhtml_legend=1 00:11:22.267 --rc geninfo_all_blocks=1 00:11:22.267 --rc geninfo_unexecuted_blocks=1 00:11:22.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:22.267 ' 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.267 --rc genhtml_branch_coverage=1 00:11:22.267 --rc genhtml_function_coverage=1 00:11:22.267 --rc genhtml_legend=1 00:11:22.267 --rc geninfo_all_blocks=1 00:11:22.267 --rc geninfo_unexecuted_blocks=1 00:11:22.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:22.267 ' 00:11:22.267 18:07:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:22.267 18:07:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3284942 00:11:22.267 18:07:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3284942 00:11:22.267 18:07:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3284942 ']' 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.267 18:07:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:22.267 [2024-11-26 18:07:59.612889] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:22.267 [2024-11-26 18:07:59.612974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284942 ] 00:11:22.267 [2024-11-26 18:07:59.687574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.534 [2024-11-26 18:07:59.736427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.534 18:07:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.534 18:07:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:22.534 18:07:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:22.818 { 00:11:22.818 "version": "SPDK v25.01-pre git sha1 f7ce15267", 00:11:22.818 "fields": { 00:11:22.818 "major": 25, 00:11:22.818 "minor": 1, 00:11:22.818 "patch": 0, 00:11:22.818 "suffix": "-pre", 00:11:22.818 "commit": "f7ce15267" 00:11:22.818 } 00:11:22.818 } 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:22.818 18:08:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:22.818 18:08:00 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:11:22.818 18:08:00 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:23.076 request: 00:11:23.076 { 00:11:23.076 "method": "env_dpdk_get_mem_stats", 00:11:23.076 "req_id": 1 00:11:23.076 } 00:11:23.076 Got JSON-RPC error response 00:11:23.076 response: 00:11:23.076 { 00:11:23.076 "code": -32601, 00:11:23.076 "message": "Method not found" 00:11:23.076 } 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.076 18:08:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3284942 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3284942 ']' 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3284942 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3284942 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3284942' 00:11:23.076 killing process with pid 3284942 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@973 -- # kill 3284942 00:11:23.076 18:08:00 app_cmdline -- common/autotest_common.sh@978 -- # wait 3284942 00:11:23.643 00:11:23.643 real 0m1.440s 00:11:23.643 user 0m1.774s 00:11:23.643 sys 0m0.444s 00:11:23.643 18:08:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.643 18:08:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:23.643 ************************************ 00:11:23.643 END TEST app_cmdline 00:11:23.643 ************************************ 00:11:23.643 18:08:00 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:11:23.643 18:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.643 18:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.643 18:08:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.643 ************************************ 00:11:23.643 START TEST version 00:11:23.643 ************************************ 00:11:23.643 18:08:00 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:11:23.643 * Looking for test storage... 
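Note: the version test starting here cross-checks the version macros in include/spdk/version.h against what the spdk Python package reports. Each field is pulled out with a small grep/cut/tr pipeline (the get_header_version helper traced below); a sketch of that extraction, assuming the header layout the grep patterns imply:

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor; (( patch != 0 )) && version+=".$patch"
    # for a "-pre" suffix the script compares the rc form (25.1rc0 in this run)
    # against: python3 -c 'import spdk; print(spdk.__version__)'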
00:11:23.643 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:11:23.643 18:08:00 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.643 18:08:00 version -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.643 18:08:00 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.643 18:08:01 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.643 18:08:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.643 18:08:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.643 18:08:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.643 18:08:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.643 18:08:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.643 18:08:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.643 18:08:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.643 18:08:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.643 18:08:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.643 18:08:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.643 18:08:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.643 18:08:01 version -- scripts/common.sh@344 -- # case "$op" in 00:11:23.643 18:08:01 version -- scripts/common.sh@345 -- # : 1 00:11:23.643 18:08:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.643 18:08:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.643 18:08:01 version -- scripts/common.sh@365 -- # decimal 1 00:11:23.643 18:08:01 version -- scripts/common.sh@353 -- # local d=1 00:11:23.643 18:08:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.644 18:08:01 version -- scripts/common.sh@355 -- # echo 1 00:11:23.644 18:08:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.644 18:08:01 version -- scripts/common.sh@366 -- # decimal 2 00:11:23.644 18:08:01 version -- scripts/common.sh@353 -- # local d=2 00:11:23.644 18:08:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.644 18:08:01 version -- scripts/common.sh@355 -- # echo 2 00:11:23.644 18:08:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.644 18:08:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.644 18:08:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.644 18:08:01 version -- scripts/common.sh@368 -- # return 0 00:11:23.644 18:08:01 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.644 18:08:01 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.644 --rc genhtml_branch_coverage=1 00:11:23.644 --rc genhtml_function_coverage=1 00:11:23.644 --rc genhtml_legend=1 00:11:23.644 --rc geninfo_all_blocks=1 00:11:23.644 --rc geninfo_unexecuted_blocks=1 00:11:23.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:23.644 ' 00:11:23.644 18:08:01 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.644 --rc genhtml_branch_coverage=1 00:11:23.644 --rc genhtml_function_coverage=1 00:11:23.644 --rc genhtml_legend=1 00:11:23.644 --rc geninfo_all_blocks=1 00:11:23.644 --rc geninfo_unexecuted_blocks=1 00:11:23.644 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:23.644 ' 00:11:23.644 18:08:01 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.644 --rc genhtml_branch_coverage=1 00:11:23.644 --rc genhtml_function_coverage=1 00:11:23.644 --rc genhtml_legend=1 00:11:23.644 --rc geninfo_all_blocks=1 00:11:23.644 --rc geninfo_unexecuted_blocks=1 00:11:23.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:23.644 ' 00:11:23.644 18:08:01 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.644 --rc genhtml_branch_coverage=1 00:11:23.644 --rc genhtml_function_coverage=1 00:11:23.644 --rc genhtml_legend=1 00:11:23.644 --rc geninfo_all_blocks=1 00:11:23.644 --rc geninfo_unexecuted_blocks=1 00:11:23.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:23.644 ' 00:11:23.644 18:08:01 version -- app/version.sh@17 -- # get_header_version major 00:11:23.644 18:08:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:23.644 18:08:01 version -- app/version.sh@14 -- # cut -f2 00:11:23.644 18:08:01 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.644 18:08:01 version -- app/version.sh@17 -- # major=25 00:11:23.644 18:08:01 version -- app/version.sh@18 -- # get_header_version minor 00:11:23.644 18:08:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:23.644 18:08:01 version -- app/version.sh@14 -- # cut -f2 00:11:23.644 18:08:01 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.644 18:08:01 version -- app/version.sh@18 -- # minor=1 00:11:23.644 18:08:01 version -- app/version.sh@19 -- # get_header_version patch 00:11:23.644 18:08:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:23.644 18:08:01 version -- app/version.sh@14 -- # cut -f2 00:11:23.644 18:08:01 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.902 18:08:01 version -- app/version.sh@19 -- # patch=0 00:11:23.902 18:08:01 version -- app/version.sh@20 -- # get_header_version suffix 00:11:23.902 18:08:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:23.902 18:08:01 version -- app/version.sh@14 -- # cut -f2 00:11:23.902 18:08:01 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.902 18:08:01 version -- app/version.sh@20 -- # suffix=-pre 00:11:23.902 18:08:01 version -- app/version.sh@22 -- # version=25.1 00:11:23.902 18:08:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:23.902 18:08:01 version -- app/version.sh@28 -- # version=25.1rc0 00:11:23.902 18:08:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:11:23.902 18:08:01 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:11:23.902 18:08:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:23.902 18:08:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:23.902 00:11:23.902 real 0m0.213s 00:11:23.902 user 0m0.133s 00:11:23.902 sys 0m0.116s 00:11:23.902 18:08:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.902 18:08:01 version -- common/autotest_common.sh@10 -- # set +x 00:11:23.902 ************************************ 00:11:23.902 END TEST version 00:11:23.902 ************************************ 00:11:23.902 18:08:01 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@194 -- # uname -s 00:11:23.903 18:08:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:23.903 18:08:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.903 18:08:01 -- common/autotest_common.sh@10 -- # set +x 00:11:23.903 18:08:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:11:23.903 18:08:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:11:23.903 18:08:01 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:11:23.903 18:08:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.903 18:08:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.903 18:08:01 -- common/autotest_common.sh@10 -- # set +x 00:11:23.903 ************************************ 00:11:23.903 START TEST llvm_fuzz 00:11:23.903 ************************************ 00:11:23.903 18:08:01 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:11:23.903 * Looking for test storage... 
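Note: after the version test passes, autotest.sh walks its list of optional suites; every branch guarded by 0 is skipped and only the fuzzer branch ([[ 1 -eq 1 ]] at autotest.sh@374) runs, handing control to test/fuzz/llvm.sh. As the trace further down shows, llvm.sh enumerates the entries under test/fuzz/llvm/ and dispatches each fuzzer directory through run_test. A sketch of that dispatch, mirroring the fuzzers=()/case lines in the trace (the exact case patterns in llvm.sh may differ):

    fuzzers=("$rootdir/test/fuzz/llvm/"*)   # common.sh llvm-gcov.sh nvmf vfio in this run
    fuzzers=("${fuzzers[@]##*/}")           # keep basenames only
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf|vfio) run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;                           # helper scripts are not fuzz targets
        esac
    done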
00:11:23.903 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:11:23.903 18:08:01 llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.903 18:08:01 llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.903 18:08:01 llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.160 18:08:01 llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.160 18:08:01 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.161 18:08:01 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.161 --rc genhtml_branch_coverage=1 00:11:24.161 --rc genhtml_function_coverage=1 00:11:24.161 --rc genhtml_legend=1 00:11:24.161 --rc geninfo_all_blocks=1 00:11:24.161 --rc geninfo_unexecuted_blocks=1 00:11:24.161 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.161 ' 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.161 --rc genhtml_branch_coverage=1 00:11:24.161 --rc genhtml_function_coverage=1 00:11:24.161 --rc genhtml_legend=1 00:11:24.161 --rc geninfo_all_blocks=1 00:11:24.161 --rc 
geninfo_unexecuted_blocks=1 00:11:24.161 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.161 ' 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.161 --rc genhtml_branch_coverage=1 00:11:24.161 --rc genhtml_function_coverage=1 00:11:24.161 --rc genhtml_legend=1 00:11:24.161 --rc geninfo_all_blocks=1 00:11:24.161 --rc geninfo_unexecuted_blocks=1 00:11:24.161 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.161 ' 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.161 --rc genhtml_branch_coverage=1 00:11:24.161 --rc genhtml_function_coverage=1 00:11:24.161 --rc genhtml_legend=1 00:11:24.161 --rc geninfo_all_blocks=1 00:11:24.161 --rc geninfo_unexecuted_blocks=1 00:11:24.161 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.161 ' 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:11:24.161 18:08:01 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.161 18:08:01 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:24.161 ************************************ 00:11:24.161 START TEST nvmf_llvm_fuzz 00:11:24.161 ************************************ 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:11:24.161 * Looking for test storage... 
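Note: the preamble that has repeated at the top of every run_test so far comes from autotest_common.sh: it probes the installed lcov with the cmp_versions helper from scripts/common.sh (the "lt 1.15 2" trace) and builds LCOV_OPTS with --gcov-tool pointing at test/fuzz/llvm/llvm-gcov.sh, so lcov can read coverage from the clang-built fuzzers. A sketch of that gate; only the lcov-1.x branch is exercised in this run and the lcov >= 2 branch is omitted here:

    ver=$(lcov --version | awk '{print $NF}')   # 1.15 on this node
    if lt "$ver" 2; then                        # lt wraps cmp_versions "$ver" '<' 2
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    # remaining genhtml_*/geninfo_* flags match the LCOV_OPTS export shown above
    export LCOV_OPTS="$lcov_rc_opt --gcov-tool $rootdir/test/fuzz/llvm/llvm-gcov.sh"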
00:11:24.161 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.161 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.423 --rc genhtml_branch_coverage=1 00:11:24.423 --rc genhtml_function_coverage=1 00:11:24.423 --rc genhtml_legend=1 00:11:24.423 --rc geninfo_all_blocks=1 00:11:24.423 --rc geninfo_unexecuted_blocks=1 00:11:24.423 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.423 ' 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.423 --rc genhtml_branch_coverage=1 00:11:24.423 --rc genhtml_function_coverage=1 00:11:24.423 --rc genhtml_legend=1 00:11:24.423 --rc geninfo_all_blocks=1 00:11:24.423 --rc geninfo_unexecuted_blocks=1 00:11:24.423 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.423 ' 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.423 --rc genhtml_branch_coverage=1 00:11:24.423 --rc genhtml_function_coverage=1 00:11:24.423 --rc genhtml_legend=1 00:11:24.423 --rc geninfo_all_blocks=1 00:11:24.423 --rc geninfo_unexecuted_blocks=1 00:11:24.423 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.423 ' 00:11:24.423 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.423 --rc genhtml_branch_coverage=1 00:11:24.423 --rc genhtml_function_coverage=1 00:11:24.423 --rc genhtml_legend=1 00:11:24.423 --rc geninfo_all_blocks=1 00:11:24.423 --rc geninfo_unexecuted_blocks=1 00:11:24.424 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.424 ' 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:24.424 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:24.425 #define SPDK_CONFIG_H 00:11:24.425 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:24.425 #define SPDK_CONFIG_APPS 1 00:11:24.425 #define SPDK_CONFIG_ARCH native 00:11:24.425 #undef SPDK_CONFIG_ASAN 00:11:24.425 #undef SPDK_CONFIG_AVAHI 00:11:24.425 #undef SPDK_CONFIG_CET 00:11:24.425 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:24.425 #define SPDK_CONFIG_COVERAGE 1 00:11:24.425 #define SPDK_CONFIG_CROSS_PREFIX 00:11:24.425 #undef SPDK_CONFIG_CRYPTO 00:11:24.425 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:24.425 #undef SPDK_CONFIG_CUSTOMOCF 00:11:24.425 #undef SPDK_CONFIG_DAOS 00:11:24.425 #define SPDK_CONFIG_DAOS_DIR 00:11:24.425 #define SPDK_CONFIG_DEBUG 1 00:11:24.425 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:24.425 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:11:24.425 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:24.425 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:24.425 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:24.425 #undef SPDK_CONFIG_DPDK_UADK 00:11:24.425 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:11:24.425 #define SPDK_CONFIG_EXAMPLES 1 00:11:24.425 #undef SPDK_CONFIG_FC 00:11:24.425 #define SPDK_CONFIG_FC_PATH 00:11:24.425 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:24.425 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:24.425 #define SPDK_CONFIG_FSDEV 1 00:11:24.425 #undef SPDK_CONFIG_FUSE 00:11:24.425 #define SPDK_CONFIG_FUZZER 1 00:11:24.425 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:11:24.425 #undef 
SPDK_CONFIG_GOLANG 00:11:24.425 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:24.425 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:24.425 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:24.425 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:24.425 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:24.425 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:24.425 #undef SPDK_CONFIG_HAVE_LZ4 00:11:24.425 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:24.425 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:24.425 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:24.425 #define SPDK_CONFIG_IDXD 1 00:11:24.425 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:24.425 #undef SPDK_CONFIG_IPSEC_MB 00:11:24.425 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:24.425 #define SPDK_CONFIG_ISAL 1 00:11:24.425 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:24.425 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:24.425 #define SPDK_CONFIG_LIBDIR 00:11:24.425 #undef SPDK_CONFIG_LTO 00:11:24.425 #define SPDK_CONFIG_MAX_LCORES 128 00:11:24.425 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:24.425 #define SPDK_CONFIG_NVME_CUSE 1 00:11:24.425 #undef SPDK_CONFIG_OCF 00:11:24.425 #define SPDK_CONFIG_OCF_PATH 00:11:24.425 #define SPDK_CONFIG_OPENSSL_PATH 00:11:24.425 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:24.425 #define SPDK_CONFIG_PGO_DIR 00:11:24.425 #undef SPDK_CONFIG_PGO_USE 00:11:24.425 #define SPDK_CONFIG_PREFIX /usr/local 00:11:24.425 #undef SPDK_CONFIG_RAID5F 00:11:24.425 #undef SPDK_CONFIG_RBD 00:11:24.425 #define SPDK_CONFIG_RDMA 1 00:11:24.425 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:24.425 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:24.425 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:24.425 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:24.425 #undef SPDK_CONFIG_SHARED 00:11:24.425 #undef SPDK_CONFIG_SMA 00:11:24.425 #define SPDK_CONFIG_TESTS 1 00:11:24.425 #undef SPDK_CONFIG_TSAN 00:11:24.425 #define SPDK_CONFIG_UBLK 1 00:11:24.425 #define SPDK_CONFIG_UBSAN 1 00:11:24.425 #undef SPDK_CONFIG_UNIT_TESTS 00:11:24.425 #undef SPDK_CONFIG_URING 00:11:24.425 #define SPDK_CONFIG_URING_PATH 00:11:24.425 #undef SPDK_CONFIG_URING_ZNS 00:11:24.425 #undef SPDK_CONFIG_USDT 00:11:24.425 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:24.425 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:24.425 #define SPDK_CONFIG_VFIO_USER 1 00:11:24.425 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:24.425 #define SPDK_CONFIG_VHOST 1 00:11:24.425 #define SPDK_CONFIG_VIRTIO 1 00:11:24.425 #undef SPDK_CONFIG_VTUNE 00:11:24.425 #define SPDK_CONFIG_VTUNE_DIR 00:11:24.425 #define SPDK_CONFIG_WERROR 1 00:11:24.425 #define SPDK_CONFIG_WPDK_DIR 00:11:24.425 #undef SPDK_CONFIG_XNVME 00:11:24.425 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:24.425 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:24.426 18:08:01 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:24.426 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:11:24.427 18:08:01 
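Each pair of "-- # : 0" / "-- # export SPDK_TEST_*" records in the block above is the xtrace of bash's parameter-default idiom: the flag gets a default only if autorun-spdk.conf has not already set it, and is then exported for every child script. A minimal sketch of that idiom, reusing flag names visible in this run (illustrative, not copied from autotest_common.sh):

    # ':' is a no-op command, so each line exists only for its ${VAR:=default} expansion.
    : "${SPDK_TEST_FUZZER:=0}"              # set to 1 by autorun-spdk.conf in this run
    export SPDK_TEST_FUZZER
    : "${SPDK_TEST_NVME:=0}"                # left at the default 0 here
    export SPDK_TEST_NVME
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"   # non-numeric defaults trace as "-- # : rdma"
    export SPDK_TEST_NVMF_TRANSPORT

With set -x active, the expanded ": 0" (or ": rdma") and the following "export" are exactly the record pairs logged above.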
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 3285616 ]] 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 3285616 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.wotsYS 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.wotsYS/tests/nvmf /tmp/spdk.wotsYS 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:24.427 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=1692594176 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3591835648 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=183510654976 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=195957915648 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 
-- # uses["$mount"]=12447260672 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=97974194176 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=97978957824 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=39185625088 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=39191584768 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5959680 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=97978650624 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=97978957824 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=307200 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=19595776000 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=19595788288 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:24.428 * Looking for test storage... 
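The records above show autotest_common.sh reading `df -T` (minus its header line) into per-mount lookup tables before the "Looking for test storage" step picks a directory with enough room for the requested 2147483648 bytes. A simplified sketch of that pattern, with byte conversion and candidate selection reduced to the bare minimum (set_test_storage itself does more):

    declare -A fss sizes avails
    requested_size=2147483648                        # ~2 GiB, as requested above

    # df -T prints: source, fs type, 1K-blocks, used, available, use%, mount point
    while read -r source fs size used avail _ mount; do
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))               # convert 1K blocks to bytes
      avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

    # Resolve the mount point backing a candidate directory, then check free space.
    target_dir=/tmp                                  # hypothetical candidate
    mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount_point]:-0} >= requested_size )); then
      echo "using $target_dir on ${fss[$mount_point]} ($mount_point)"
    fi

In the trace that follows, the spdk_root overlay mounted at / reports roughly 183 GB available, so the check passes and the run settles on the fuzz test directory as SPDK_TEST_STORAGE.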
00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=183510654976 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=14661853184 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:24.428 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.428 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.428 --rc genhtml_branch_coverage=1 00:11:24.428 --rc genhtml_function_coverage=1 00:11:24.428 --rc genhtml_legend=1 00:11:24.428 --rc geninfo_all_blocks=1 00:11:24.428 --rc geninfo_unexecuted_blocks=1 00:11:24.428 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.428 ' 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.429 --rc genhtml_branch_coverage=1 00:11:24.429 --rc genhtml_function_coverage=1 00:11:24.429 --rc genhtml_legend=1 00:11:24.429 --rc geninfo_all_blocks=1 00:11:24.429 --rc geninfo_unexecuted_blocks=1 00:11:24.429 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.429 ' 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.429 --rc genhtml_branch_coverage=1 00:11:24.429 --rc genhtml_function_coverage=1 00:11:24.429 --rc genhtml_legend=1 00:11:24.429 --rc geninfo_all_blocks=1 00:11:24.429 --rc geninfo_unexecuted_blocks=1 00:11:24.429 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.429 ' 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.429 --rc genhtml_branch_coverage=1 00:11:24.429 --rc genhtml_function_coverage=1 00:11:24.429 --rc genhtml_legend=1 00:11:24.429 --rc geninfo_all_blocks=1 00:11:24.429 --rc geninfo_unexecuted_blocks=1 00:11:24.429 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:24.429 ' 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:11:24.429 18:08:01 
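The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x so that LCOV_OPTS can use the older lcov_branch_coverage/lcov_function_coverage flag names. The comparison splits each version string on '.', '-' and ':' and walks the fields left to right; a self-contained sketch of that logic (illustrative, not the SPDK helper itself):

    version_lt() {                      # returns 0 (true) when $1 < $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < len; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1                          # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # same outcome as the "lt 1.15 2" check above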
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:24.429 18:08:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:11:24.688 [2024-11-26 18:08:01.882079] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:24.688 [2024-11-26 18:08:01.882135] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285672 ] 00:11:24.688 [2024-11-26 18:08:02.097443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.948 [2024-11-26 18:08:02.138057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.948 [2024-11-26 18:08:02.200452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.948 [2024-11-26 18:08:02.216641] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:11:24.948 INFO: Running with entropic power schedule (0xFF, 100). 00:11:24.948 INFO: Seed: 3573359844 00:11:24.948 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:24.948 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:24.948 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:11:24.948 INFO: A corpus is not provided, starting from an empty corpus 00:11:24.948 #2 INITED exec/s: 0 rss: 64Mb 00:11:24.948 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:24.948 This may also happen if the target rejected all inputs we tried so far 00:11:24.948 [2024-11-26 18:08:02.262388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:24.948 [2024-11-26 18:08:02.262414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:24.948 [2024-11-26 18:08:02.262472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:24.948 [2024-11-26 18:08:02.262483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:24.948 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:11:24.948 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:24.948 #31 NEW cov: 12220 ft: 12193 corp: 2/167b lim: 320 exec/s: 0 rss: 72Mb L: 166/166 MS: 4 ChangeBinInt-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes- 00:11:25.207 [2024-11-26 18:08:02.412743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.207 [2024-11-26 18:08:02.412772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.207 [2024-11-26 18:08:02.412830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:25.207 [2024-11-26 18:08:02.412841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.207 #42 NEW cov: 12333 ft: 12833 corp: 
3/333b lim: 320 exec/s: 0 rss: 72Mb L: 166/166 MS: 1 ChangeBit- 00:11:25.207 [2024-11-26 18:08:02.472773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:25.207 [2024-11-26 18:08:02.472800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.207 #47 NEW cov: 12339 ft: 13393 corp: 4/418b lim: 320 exec/s: 0 rss: 72Mb L: 85/166 MS: 5 ShuffleBytes-ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:11:25.207 [2024-11-26 18:08:02.512779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:25.207 [2024-11-26 18:08:02.512801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.207 #48 NEW cov: 12424 ft: 13659 corp: 5/504b lim: 320 exec/s: 0 rss: 72Mb L: 86/166 MS: 1 InsertByte- 00:11:25.207 [2024-11-26 18:08:02.572963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:25.207 [2024-11-26 18:08:02.572985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.207 #49 NEW cov: 12424 ft: 13786 corp: 6/589b lim: 320 exec/s: 0 rss: 72Mb L: 85/166 MS: 1 ChangeBinInt- 00:11:25.207 [2024-11-26 18:08:02.613232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.207 [2024-11-26 18:08:02.613254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.207 [2024-11-26 18:08:02.613311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:25.207 [2024-11-26 18:08:02.613322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.207 #50 NEW cov: 12424 ft: 13895 corp: 7/763b lim: 320 exec/s: 0 rss: 72Mb L: 174/174 MS: 1 CMP- DE: "\000\000\000\000\377\377\377\377"- 00:11:25.208 [2024-11-26 18:08:02.653332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.208 [2024-11-26 18:08:02.653355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.208 [2024-11-26 18:08:02.653399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:25.208 [2024-11-26 18:08:02.653410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.467 #51 NEW cov: 12424 ft: 13926 corp: 8/916b lim: 320 exec/s: 0 rss: 72Mb L: 153/174 MS: 1 EraseBytes- 00:11:25.467 [2024-11-26 18:08:02.713540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.467 [2024-11-26 18:08:02.713562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.467 [2024-11-26 18:08:02.713620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN 
COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:25.467 [2024-11-26 18:08:02.713631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.467 #52 NEW cov: 12424 ft: 13999 corp: 9/1083b lim: 320 exec/s: 0 rss: 72Mb L: 167/174 MS: 1 InsertByte- 00:11:25.467 [2024-11-26 18:08:02.773684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.467 [2024-11-26 18:08:02.773707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.467 [2024-11-26 18:08:02.773765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:25.467 [2024-11-26 18:08:02.773776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.467 #53 NEW cov: 12424 ft: 14000 corp: 10/1250b lim: 320 exec/s: 0 rss: 72Mb L: 167/174 MS: 1 InsertByte- 00:11:25.467 [2024-11-26 18:08:02.813761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:25.467 [2024-11-26 18:08:02.813783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.467 [2024-11-26 18:08:02.813843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf4f4f4f4f4f4f4f4 00:11:25.467 [2024-11-26 18:08:02.813855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.467 NEW_FUNC[1/2]: 0x153f918 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:11:25.467 NEW_FUNC[2/2]: 0x1975728 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:11:25.467 #54 NEW cov: 12476 ft: 14116 corp: 11/1406b lim: 320 exec/s: 0 rss: 73Mb L: 156/174 MS: 1 InsertRepeatedBytes- 00:11:25.467 [2024-11-26 18:08:02.873907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:25.467 [2024-11-26 18:08:02.873929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.467 [2024-11-26 18:08:02.874004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf4f4f4f4f4f4f4f4 00:11:25.467 [2024-11-26 18:08:02.874016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.467 #55 NEW cov: 12476 ft: 14181 corp: 12/1562b lim: 320 exec/s: 0 rss: 73Mb L: 156/174 MS: 1 CopyPart- 00:11:25.728 [2024-11-26 18:08:02.934469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.728 [2024-11-26 18:08:02.934491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:02.934548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:f2f2f2f2 cdw11:f2f2f2f2 00:11:25.728 [2024-11-26 18:08:02.934559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:02.934616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f2) qid:0 cid:6 nsid:f2f2f2f2 cdw10:f2f2f2f2 cdw11:f2f2f2f2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf2f2f2f2f2f2f2f2 00:11:25.728 [2024-11-26 18:08:02.934628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:02.934687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f2) qid:0 cid:7 nsid:f2f2f2f2 cdw10:9c9c9c9c cdw11:9c9c9c9c SGL TRANSPORT DATA BLOCK TRANSPORT 0x9c9c9c9c9c9c9c9c 00:11:25.728 [2024-11-26 18:08:02.934698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:25.728 #56 NEW cov: 12476 ft: 14577 corp: 13/1848b lim: 320 exec/s: 0 rss: 73Mb L: 286/286 MS: 1 InsertRepeatedBytes- 00:11:25.728 [2024-11-26 18:08:02.974216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.728 [2024-11-26 18:08:02.974239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:02.974299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:25.728 [2024-11-26 18:08:02.974310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.728 #57 NEW cov: 12476 ft: 14599 corp: 14/2015b lim: 320 exec/s: 0 rss: 73Mb L: 167/286 MS: 1 CopyPart- 00:11:25.728 [2024-11-26 18:08:03.034404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:25.728 [2024-11-26 18:08:03.034426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.034486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xcf4f4f4f4f4f4f4 00:11:25.728 [2024-11-26 18:08:03.034497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.728 #58 NEW cov: 12476 ft: 14604 corp: 15/2171b lim: 320 exec/s: 0 rss: 73Mb L: 156/286 MS: 1 ChangeBinInt- 00:11:25.728 [2024-11-26 18:08:03.094887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:25.728 [2024-11-26 18:08:03.094909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.094965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:f2f2f2f2 cdw11:f2f2f2f2 00:11:25.728 [2024-11-26 18:08:03.094975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.095032] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f2) qid:0 cid:6 nsid:f2f2f2f2 cdw10:f2f2f2f2 cdw11:f2f2f2f2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf2f2f2f2f2f2f2f2 00:11:25.728 [2024-11-26 18:08:03.095043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.095100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f2) qid:0 cid:7 nsid:f2f2f2f2 cdw10:9c9c9c9c cdw11:9c9c9c9c SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:25.728 [2024-11-26 18:08:03.095111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:25.728 #59 NEW cov: 12476 ft: 14636 corp: 16/2470b lim: 320 exec/s: 0 rss: 73Mb L: 299/299 MS: 1 InsertRepeatedBytes- 00:11:25.728 [2024-11-26 18:08:03.154944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:25.728 [2024-11-26 18:08:03.154966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.155026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf4f4f4f4f4f4f4f4 00:11:25.728 [2024-11-26 18:08:03.155038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.155096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf4f4f4f4f4f4ffff 00:11:25.728 [2024-11-26 18:08:03.155106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:25.728 [2024-11-26 18:08:03.155162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:7 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:25.728 [2024-11-26 18:08:03.155173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:25.988 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:25.988 #60 NEW cov: 12499 ft: 14723 corp: 17/2729b lim: 320 exec/s: 0 rss: 73Mb L: 259/299 MS: 1 CopyPart- 00:11:25.988 [2024-11-26 18:08:03.194693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:25.988 [2024-11-26 18:08:03.194718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.988 #61 NEW cov: 12499 ft: 14747 corp: 18/2815b lim: 320 exec/s: 0 rss: 73Mb L: 86/299 MS: 1 InsertByte- 00:11:25.988 [2024-11-26 18:08:03.234862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f7 cdw11:f4f4f4f4 00:11:25.988 [2024-11-26 18:08:03.234884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.988 #62 NEW cov: 12499 ft: 14785 corp: 19/2901b lim: 320 exec/s: 62 rss: 73Mb L: 86/299 MS: 1 InsertByte- 00:11:25.988 [2024-11-26 18:08:03.295033] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:25.988 [2024-11-26 18:08:03.295055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.988 #63 NEW cov: 12499 ft: 14850 corp: 20/2986b lim: 320 exec/s: 63 rss: 73Mb L: 85/299 MS: 1 ChangeByte- 00:11:25.988 [2024-11-26 18:08:03.335132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f7 cdw11:f4f4f4f4 00:11:25.988 [2024-11-26 18:08:03.335154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.988 #64 NEW cov: 12499 ft: 14864 corp: 21/3072b lim: 320 exec/s: 64 rss: 73Mb L: 86/299 MS: 1 ChangeByte- 00:11:25.988 [2024-11-26 18:08:03.395295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:26f4f4f4 cdw11:f4f4f4f4 00:11:25.988 [2024-11-26 18:08:03.395317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:25.988 #65 NEW cov: 12499 ft: 14883 corp: 22/3158b lim: 320 exec/s: 65 rss: 73Mb L: 86/299 MS: 1 InsertByte- 00:11:26.247 [2024-11-26 18:08:03.435396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.247 [2024-11-26 18:08:03.435418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.247 #66 NEW cov: 12499 ft: 14910 corp: 23/3243b lim: 320 exec/s: 66 rss: 74Mb L: 85/299 MS: 1 ChangeByte- 00:11:26.247 [2024-11-26 18:08:03.495614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f7 cdw11:f4f4f4f4 00:11:26.247 [2024-11-26 18:08:03.495637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.247 #67 NEW cov: 12499 ft: 14922 corp: 24/3329b lim: 320 exec/s: 67 rss: 74Mb L: 86/299 MS: 1 ChangeBit- 00:11:26.247 [2024-11-26 18:08:03.536051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:26.247 [2024-11-26 18:08:03.536073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.248 [2024-11-26 18:08:03.536130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf4f4f4f4f4f4f4f4 00:11:26.248 [2024-11-26 18:08:03.536141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.248 [2024-11-26 18:08:03.536196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:6 nsid:f4f4f4f4 cdw10:ffffffff cdw11:ffffffff 00:11:26.248 [2024-11-26 18:08:03.536207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.248 [2024-11-26 18:08:03.536260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:7 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.248 [2024-11-26 18:08:03.536271] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:26.248 #68 NEW cov: 12499 ft: 14951 corp: 25/3615b lim: 320 exec/s: 68 rss: 74Mb L: 286/299 MS: 1 CopyPart- 00:11:26.248 [2024-11-26 18:08:03.595869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.248 [2024-11-26 18:08:03.595890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.248 #69 NEW cov: 12499 ft: 14974 corp: 26/3701b lim: 320 exec/s: 69 rss: 74Mb L: 86/299 MS: 1 ChangeByte- 00:11:26.248 [2024-11-26 18:08:03.656085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f7 cdw11:f4f4f4f4 00:11:26.248 [2024-11-26 18:08:03.656108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.248 #70 NEW cov: 12499 ft: 14983 corp: 27/3787b lim: 320 exec/s: 70 rss: 74Mb L: 86/299 MS: 1 CMP- DE: "\000\000\000\037"- 00:11:26.508 [2024-11-26 18:08:03.696111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.508 [2024-11-26 18:08:03.696133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.508 #71 NEW cov: 12499 ft: 14990 corp: 28/3873b lim: 320 exec/s: 71 rss: 74Mb L: 86/299 MS: 1 ChangeBinInt- 00:11:26.508 [2024-11-26 18:08:03.736627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:26.508 [2024-11-26 18:08:03.736648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.508 [2024-11-26 18:08:03.736707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xf4f4f4f4f4f4f4f4 00:11:26.508 [2024-11-26 18:08:03.736719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.508 [2024-11-26 18:08:03.736773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:26.508 [2024-11-26 18:08:03.736784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.508 [2024-11-26 18:08:03.736839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:7 nsid:f4f4f45d cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.508 [2024-11-26 18:08:03.736849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:26.508 #72 NEW cov: 12499 ft: 15015 corp: 29/4173b lim: 320 exec/s: 72 rss: 74Mb L: 300/300 MS: 1 CopyPart- 00:11:26.508 [2024-11-26 18:08:03.776611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:1111f4f4 00:11:26.508 [2024-11-26 18:08:03.776632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.508 
[2024-11-26 18:08:03.776687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE IMAGE DOWNLOAD (11) qid:0 cid:5 nsid:11111111 cdw10:f4f4f4f4 cdw11:f4f4f4f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:26.508 [2024-11-26 18:08:03.776699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.508 NEW_FUNC[1/1]: 0x1976298 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:11:26.508 #73 NEW cov: 12519 ft: 15337 corp: 30/4315b lim: 320 exec/s: 73 rss: 74Mb L: 142/300 MS: 1 InsertRepeatedBytes- 00:11:26.508 [2024-11-26 18:08:03.816479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.508 [2024-11-26 18:08:03.816501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.508 #74 NEW cov: 12519 ft: 15347 corp: 31/4400b lim: 320 exec/s: 74 rss: 74Mb L: 85/300 MS: 1 ChangeBit- 00:11:26.508 [2024-11-26 18:08:03.856639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.508 [2024-11-26 18:08:03.856663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.508 #75 NEW cov: 12519 ft: 15370 corp: 32/4486b lim: 320 exec/s: 75 rss: 74Mb L: 86/300 MS: 1 ShuffleBytes- 00:11:26.508 [2024-11-26 18:08:03.916830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.508 [2024-11-26 18:08:03.916853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.508 #76 NEW cov: 12519 ft: 15378 corp: 33/4580b lim: 320 exec/s: 76 rss: 74Mb L: 94/300 MS: 1 CMP- DE: "@\000\000\000\000\000\000\000"- 00:11:26.768 [2024-11-26 18:08:03.957047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.768 [2024-11-26 18:08:03.957070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:03.957132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:26.768 [2024-11-26 18:08:03.957144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:03.997142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.768 [2024-11-26 18:08:03.997165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:03.997220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:26.768 [2024-11-26 18:08:03.997230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.768 #78 NEW cov: 12519 ft: 15391 corp: 34/4723b lim: 320 exec/s: 78 rss: 74Mb L: 143/300 
MS: 2 InsertRepeatedBytes-InsertByte- 00:11:26.768 [2024-11-26 18:08:04.037400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:26.768 [2024-11-26 18:08:04.037423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:04.037479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:26.768 [2024-11-26 18:08:04.037489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:04.037545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:6 nsid:9c9c9c9c cdw10:f4f4f4f4 cdw11:f4f4f4f7 00:11:26.768 [2024-11-26 18:08:04.037571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:26.768 #79 NEW cov: 12519 ft: 15503 corp: 35/4960b lim: 320 exec/s: 79 rss: 74Mb L: 237/300 MS: 1 CrossOver- 00:11:26.768 [2024-11-26 18:08:04.097457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b4) qid:0 cid:4 nsid:b4b4b4b4 cdw10:b4b4b4b4 cdw11:b4b4b4b4 00:11:26.768 [2024-11-26 18:08:04.097480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:04.097536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9c) qid:0 cid:5 nsid:9c9c9c9c cdw10:9c9c9c9c cdw11:9c9c9c9c 00:11:26.768 [2024-11-26 18:08:04.097547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.768 #80 NEW cov: 12519 ft: 15509 corp: 36/5144b lim: 320 exec/s: 80 rss: 74Mb L: 184/300 MS: 1 InsertRepeatedBytes- 00:11:26.768 [2024-11-26 18:08:04.137459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f7 cdw11:f4f4f4f4 00:11:26.768 [2024-11-26 18:08:04.137482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.768 #81 NEW cov: 12519 ft: 15520 corp: 37/5230b lim: 320 exec/s: 81 rss: 74Mb L: 86/300 MS: 1 ChangeByte- 00:11:26.768 [2024-11-26 18:08:04.197887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f4 cdw11:f4f4f4f4 00:11:26.768 [2024-11-26 18:08:04.197909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:04.197964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:11:26.768 [2024-11-26 18:08:04.197975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:26.768 [2024-11-26 18:08:04.198031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:26.768 [2024-11-26 18:08:04.198042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:27.029 #82 NEW cov: 12519 ft: 
15526 corp: 38/5444b lim: 320 exec/s: 82 rss: 74Mb L: 214/300 MS: 1 InsertRepeatedBytes- 00:11:27.029 [2024-11-26 18:08:04.257781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f4) qid:0 cid:4 nsid:f4f4f4f4 cdw10:f4f4f4f7 cdw11:f4f4f4f4 00:11:27.029 [2024-11-26 18:08:04.257803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.029 #83 NEW cov: 12519 ft: 15554 corp: 39/5534b lim: 320 exec/s: 41 rss: 74Mb L: 90/300 MS: 1 PersAutoDict- DE: "\000\000\000\037"- 00:11:27.029 #83 DONE cov: 12519 ft: 15554 corp: 39/5534b lim: 320 exec/s: 41 rss: 74Mb 00:11:27.029 ###### Recommended dictionary. ###### 00:11:27.029 "\000\000\000\000\377\377\377\377" # Uses: 0 00:11:27.029 "\000\000\000\037" # Uses: 1 00:11:27.029 "@\000\000\000\000\000\000\000" # Uses: 0 00:11:27.029 ###### End of recommended dictionary. ###### 00:11:27.029 Done 83 runs in 2 second(s) 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:27.029 18:08:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:11:27.029 [2024-11-26 
18:08:04.466751] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:27.029 [2024-11-26 18:08:04.466830] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286187 ] 00:11:27.289 [2024-11-26 18:08:04.677024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.289 [2024-11-26 18:08:04.716668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.549 [2024-11-26 18:08:04.779026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.549 [2024-11-26 18:08:04.795216] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:11:27.549 INFO: Running with entropic power schedule (0xFF, 100). 00:11:27.549 INFO: Seed: 1856409399 00:11:27.549 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:27.549 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:27.549 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:11:27.549 INFO: A corpus is not provided, starting from an empty corpus 00:11:27.549 #2 INITED exec/s: 0 rss: 66Mb 00:11:27.549 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:27.549 This may also happen if the target rejected all inputs we tried so far 00:11:27.549 [2024-11-26 18:08:04.840586] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:27.549 [2024-11-26 18:08:04.840827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:27.549 [2024-11-26 18:08:04.840854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.549 NEW_FUNC[1/716]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:11:27.549 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:27.549 #18 NEW cov: 12334 ft: 12321 corp: 2/7b lim: 30 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:11:27.549 [2024-11-26 18:08:04.990953] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:27.549 [2024-11-26 18:08:04.991195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:27.549 [2024-11-26 18:08:04.991222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.808 NEW_FUNC[1/1]: 0x1a48a08 in nvme_tcp_read_data /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h:405 00:11:27.808 #29 NEW cov: 12448 ft: 12916 corp: 3/14b lim: 30 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertByte- 00:11:27.808 [2024-11-26 18:08:05.051026] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:27.808 [2024-11-26 18:08:05.051264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 
cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:27.808 [2024-11-26 18:08:05.051286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.808 #30 NEW cov: 12454 ft: 13012 corp: 4/21b lim: 30 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 ChangeBinInt- 00:11:27.808 [2024-11-26 18:08:05.111131] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:27.808 [2024-11-26 18:08:05.111371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:27.808 [2024-11-26 18:08:05.111397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.808 #31 NEW cov: 12539 ft: 13441 corp: 5/28b lim: 30 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 CopyPart- 00:11:27.808 [2024-11-26 18:08:05.151245] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (11776) > len (44) 00:11:27.808 [2024-11-26 18:08:05.151485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:27.808 [2024-11-26 18:08:05.151512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.808 #32 NEW cov: 12552 ft: 13694 corp: 6/36b lim: 30 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:11:27.808 [2024-11-26 18:08:05.211422] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:27.808 [2024-11-26 18:08:05.211665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:27.808 [2024-11-26 18:08:05.211687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:27.808 #33 NEW cov: 12552 ft: 13748 corp: 7/43b lim: 30 exec/s: 0 rss: 74Mb L: 7/8 MS: 1 CopyPart- 00:11:28.067 [2024-11-26 18:08:05.271809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000002e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.271831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.067 #34 NEW cov: 12562 ft: 13868 corp: 8/49b lim: 30 exec/s: 0 rss: 74Mb L: 6/8 MS: 1 EraseBytes- 00:11:28.067 [2024-11-26 18:08:05.331760] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:28.067 [2024-11-26 18:08:05.332005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.332027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.067 #35 NEW cov: 12562 ft: 13882 corp: 9/56b lim: 30 exec/s: 0 rss: 74Mb L: 7/8 MS: 1 ChangeByte- 00:11:28.067 [2024-11-26 18:08:05.371851] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:28.067 [2024-11-26 18:08:05.372091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.372114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.067 #36 NEW cov: 12562 ft: 13991 corp: 10/63b lim: 30 exec/s: 0 rss: 74Mb L: 7/8 MS: 1 ShuffleBytes- 00:11:28.067 [2024-11-26 18:08:05.412024] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:28.067 [2024-11-26 18:08:05.412160] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (11776) > len (44) 00:11:28.067 [2024-11-26 18:08:05.412391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.412415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.067 [2024-11-26 18:08:05.412468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.412483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:28.067 #37 NEW cov: 12562 ft: 14364 corp: 11/76b lim: 30 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:11:28.067 [2024-11-26 18:08:05.452099] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (47108) > buf size (4096) 00:11:28.067 [2024-11-26 18:08:05.452340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2e00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.452362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.067 #38 NEW cov: 12562 ft: 14407 corp: 12/83b lim: 30 exec/s: 0 rss: 74Mb L: 7/13 MS: 1 ShuffleBytes- 00:11:28.067 [2024-11-26 18:08:05.512279] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (11776) > len (4) 00:11:28.067 [2024-11-26 18:08:05.512521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.067 [2024-11-26 18:08:05.512542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.326 #39 NEW cov: 12562 ft: 14415 corp: 13/90b lim: 30 exec/s: 0 rss: 74Mb L: 7/13 MS: 1 ShuffleBytes- 00:11:28.326 [2024-11-26 18:08:05.552387] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (11776) > len (44) 00:11:28.326 [2024-11-26 18:08:05.552635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.327 [2024-11-26 18:08:05.552658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.327 #40 NEW cov: 12562 ft: 14460 corp: 14/98b lim: 30 exec/s: 0 rss: 74Mb L: 8/13 MS: 1 ChangeBit- 00:11:28.327 [2024-11-26 18:08:05.592507] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:28.327 [2024-11-26 18:08:05.592748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.327 [2024-11-26 18:08:05.592771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.327 #41 NEW cov: 12562 ft: 14489 corp: 15/104b lim: 30 exec/s: 0 rss: 74Mb L: 6/13 MS: 1 ShuffleBytes- 00:11:28.327 [2024-11-26 18:08:05.632597] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:28.327 [2024-11-26 18:08:05.632834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.327 [2024-11-26 18:08:05.632856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.327 #42 NEW cov: 12562 ft: 14552 corp: 16/113b lim: 30 exec/s: 0 rss: 74Mb L: 9/13 MS: 1 CrossOver- 00:11:28.327 [2024-11-26 18:08:05.672764] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:28.327 [2024-11-26 18:08:05.673009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.327 [2024-11-26 18:08:05.673031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.327 #43 NEW cov: 12562 ft: 14592 corp: 17/119b lim: 30 exec/s: 0 rss: 74Mb L: 6/13 MS: 1 ShuffleBytes- 00:11:28.327 [2024-11-26 18:08:05.712858] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:11:28.327 [2024-11-26 18:08:05.713105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000022e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.327 [2024-11-26 18:08:05.713127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.327 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:28.327 #44 NEW cov: 12585 ft: 14644 corp: 18/125b lim: 30 exec/s: 0 rss: 74Mb L: 6/13 MS: 1 ChangeBinInt- 00:11:28.327 [2024-11-26 18:08:05.772999] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:28.327 [2024-11-26 18:08:05.773229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.327 [2024-11-26 18:08:05.773252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.586 #45 NEW cov: 12585 ft: 14649 corp: 19/131b lim: 30 exec/s: 0 rss: 74Mb L: 6/13 MS: 1 ChangeByte- 00:11:28.586 [2024-11-26 18:08:05.833198] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (26628) > buf size (4096) 00:11:28.586 [2024-11-26 18:08:05.833447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.833470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.586 #46 NEW cov: 12585 ft: 14660 corp: 20/137b lim: 30 exec/s: 46 rss: 74Mb L: 6/13 MS: 1 ChangeBit- 00:11:28.586 
[2024-11-26 18:08:05.873328] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (11776) > len (44) 00:11:28.586 [2024-11-26 18:08:05.873467] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000002 00:11:28.586 [2024-11-26 18:08:05.873694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.873717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.586 [2024-11-26 18:08:05.873771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.873783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:28.586 #47 NEW cov: 12591 ft: 14704 corp: 21/152b lim: 30 exec/s: 47 rss: 74Mb L: 15/15 MS: 1 CopyPart- 00:11:28.586 [2024-11-26 18:08:05.933580] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:11:28.586 [2024-11-26 18:08:05.933721] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:11:28.586 [2024-11-26 18:08:05.933845] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:11:28.586 [2024-11-26 18:08:05.933967] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:11:28.586 [2024-11-26 18:08:05.934196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.934218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.586 [2024-11-26 18:08:05.934271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.934283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:28.586 [2024-11-26 18:08:05.934335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8e8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.934346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:28.586 [2024-11-26 18:08:05.934398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8e8e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.934412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:28.586 #48 NEW cov: 12591 ft: 15295 corp: 22/177b lim: 30 exec/s: 48 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:11:28.586 [2024-11-26 18:08:05.993762] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008a8e 00:11:28.586 [2024-11-26 18:08:05.993898] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:11:28.586 [2024-11-26 18:08:05.994020] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid 
log page offset 0x200008e8e 00:11:28.586 [2024-11-26 18:08:05.994144] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:11:28.586 [2024-11-26 18:08:05.994384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.994408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.586 [2024-11-26 18:08:05.994459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.994470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:28.586 [2024-11-26 18:08:05.994522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8e8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.586 [2024-11-26 18:08:05.994534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:28.587 [2024-11-26 18:08:05.994585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8e8e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.587 [2024-11-26 18:08:05.994596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:28.846 #49 NEW cov: 12591 ft: 15319 corp: 23/202b lim: 30 exec/s: 49 rss: 75Mb L: 25/25 MS: 1 ChangeByte- 00:11:28.846 [2024-11-26 18:08:06.053814] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:28.846 [2024-11-26 18:08:06.054061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.846 [2024-11-26 18:08:06.054084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.846 #52 NEW cov: 12591 ft: 15320 corp: 24/211b lim: 30 exec/s: 52 rss: 75Mb L: 9/25 MS: 3 ChangeBit-ShuffleBytes-CMP- DE: "\377\377\377\377\377\377\003\000"- 00:11:28.846 [2024-11-26 18:08:06.093955] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (26628) > buf size (4096) 00:11:28.846 [2024-11-26 18:08:06.094193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.846 [2024-11-26 18:08:06.094215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.846 #53 NEW cov: 12591 ft: 15347 corp: 25/217b lim: 30 exec/s: 53 rss: 75Mb L: 6/25 MS: 1 CopyPart- 00:11:28.846 [2024-11-26 18:08:06.154112] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2e 00:11:28.846 [2024-11-26 18:08:06.154356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2c0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.846 [2024-11-26 18:08:06.154382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.846 #54 NEW cov: 12591 
ft: 15358 corp: 26/225b lim: 30 exec/s: 54 rss: 75Mb L: 8/25 MS: 1 InsertByte- 00:11:28.846 [2024-11-26 18:08:06.214284] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (47108) > buf size (4096) 00:11:28.846 [2024-11-26 18:08:06.214531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2e00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.846 [2024-11-26 18:08:06.214558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:28.846 #55 NEW cov: 12591 ft: 15363 corp: 27/232b lim: 30 exec/s: 55 rss: 75Mb L: 7/25 MS: 1 ChangeByte- 00:11:28.846 [2024-11-26 18:08:06.274464] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:28.846 [2024-11-26 18:08:06.274707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.846 [2024-11-26 18:08:06.274730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.314562] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:29.106 [2024-11-26 18:08:06.314806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a00002e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.314828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.106 #57 NEW cov: 12591 ft: 15370 corp: 28/243b lim: 30 exec/s: 57 rss: 75Mb L: 11/25 MS: 2 InsertRepeatedBytes-ShuffleBytes- 00:11:29.106 [2024-11-26 18:08:06.354689] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:29.106 [2024-11-26 18:08:06.354931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.354954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.394950] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:29.106 [2024-11-26 18:08:06.395654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.395676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.395730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.395741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.395790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.395800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.395850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.395862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.395911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.395922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:29.106 #59 NEW cov: 12591 ft: 15484 corp: 29/273b lim: 30 exec/s: 59 rss: 75Mb L: 30/30 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:11:29.106 [2024-11-26 18:08:06.434938] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:29.106 [2024-11-26 18:08:06.435082] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000003 00:11:29.106 [2024-11-26 18:08:06.435316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.435340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.106 [2024-11-26 18:08:06.435392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.435404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.106 #60 NEW cov: 12591 ft: 15497 corp: 30/287b lim: 30 exec/s: 60 rss: 75Mb L: 14/30 MS: 1 CopyPart- 00:11:29.106 [2024-11-26 18:08:06.495126] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (30976) > len (4) 00:11:29.106 [2024-11-26 18:08:06.495380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000002e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.495403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.106 #61 NEW cov: 12591 ft: 15512 corp: 31/293b lim: 30 exec/s: 61 rss: 75Mb L: 6/30 MS: 1 ChangeByte- 00:11:29.106 [2024-11-26 18:08:06.535249] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:11:29.106 [2024-11-26 18:08:06.535500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a00002c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.106 [2024-11-26 18:08:06.535520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.366 #67 NEW cov: 12591 ft: 15576 corp: 32/299b lim: 30 exec/s: 67 rss: 75Mb L: 6/30 MS: 1 EraseBytes- 00:11:29.366 [2024-11-26 18:08:06.595384] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10248) > buf size (4096) 00:11:29.366 [2024-11-26 18:08:06.595638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:11:29.366 [2024-11-26 18:08:06.595660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.366 #68 NEW cov: 12591 ft: 15587 corp: 33/305b lim: 30 exec/s: 68 rss: 75Mb L: 6/30 MS: 1 ChangeByte- 00:11:29.366 [2024-11-26 18:08:06.655519] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:11:29.366 [2024-11-26 18:08:06.655757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.366 [2024-11-26 18:08:06.655780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.366 #69 NEW cov: 12591 ft: 15599 corp: 34/315b lim: 30 exec/s: 69 rss: 75Mb L: 10/30 MS: 1 InsertByte- 00:11:29.366 [2024-11-26 18:08:06.695656] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:11:29.366 [2024-11-26 18:08:06.695890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.366 [2024-11-26 18:08:06.695911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.366 #71 NEW cov: 12591 ft: 15605 corp: 35/324b lim: 30 exec/s: 71 rss: 75Mb L: 9/30 MS: 2 InsertByte-CrossOver- 00:11:29.366 [2024-11-26 18:08:06.735763] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (47108) > buf size (4096) 00:11:29.366 [2024-11-26 18:08:06.736004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2e00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.366 [2024-11-26 18:08:06.736026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.366 #72 NEW cov: 12591 ft: 15609 corp: 36/331b lim: 30 exec/s: 72 rss: 75Mb L: 7/30 MS: 1 ChangeBit- 00:11:29.366 [2024-11-26 18:08:06.795947] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (45100) > buf size (4096) 00:11:29.366 [2024-11-26 18:08:06.796082] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:11:29.366 [2024-11-26 18:08:06.796304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2c0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.366 [2024-11-26 18:08:06.796327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:29.366 [2024-11-26 18:08:06.796386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:29.366 [2024-11-26 18:08:06.796398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:29.625 #73 NEW cov: 12591 ft: 15624 corp: 37/343b lim: 30 exec/s: 36 rss: 75Mb L: 12/30 MS: 1 CrossOver- 00:11:29.625 #73 DONE cov: 12591 ft: 15624 corp: 37/343b lim: 30 exec/s: 36 rss: 75Mb 00:11:29.625 ###### Recommended dictionary. ###### 00:11:29.625 "\377\377\377\377\377\377\003\000" # Uses: 0 00:11:29.625 ###### End of recommended dictionary. 
###### 00:11:29.625 Done 73 runs in 2 second(s) 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:29.625 18:08:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:11:29.625 [2024-11-26 18:08:07.005939] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:29.625 [2024-11-26 18:08:07.006000] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286594 ] 00:11:29.885 [2024-11-26 18:08:07.227000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.885 [2024-11-26 18:08:07.266775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.885 [2024-11-26 18:08:07.329199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.144 [2024-11-26 18:08:07.345407] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:11:30.144 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:30.144 INFO: Seed: 109429827 00:11:30.144 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:30.144 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:30.144 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:30.144 INFO: A corpus is not provided, starting from an empty corpus 00:11:30.144 #2 INITED exec/s: 0 rss: 65Mb 00:11:30.144 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:30.144 This may also happen if the target rejected all inputs we tried so far 00:11:30.144 [2024-11-26 18:08:07.392946] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.144 [2024-11-26 18:08:07.393088] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.144 [2024-11-26 18:08:07.393213] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.144 [2024-11-26 18:08:07.393448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.144 [2024-11-26 18:08:07.393480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.144 [2024-11-26 18:08:07.393532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.144 [2024-11-26 18:08:07.393546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.144 [2024-11-26 18:08:07.393594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.144 [2024-11-26 18:08:07.393606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.144 NEW_FUNC[1/716]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:11:30.144 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:30.144 #7 NEW cov: 12275 ft: 12274 corp: 2/28b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 5 CopyPart-CMP-ChangeByte-EraseBytes-InsertRepeatedBytes- DE: "\377["- 00:11:30.144 [2024-11-26 18:08:07.543425] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.144 [2024-11-26 18:08:07.543590] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.144 [2024-11-26 18:08:07.543724] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.144 [2024-11-26 18:08:07.543966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.144 [2024-11-26 18:08:07.543996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.144 [2024-11-26 18:08:07.544056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.144 [2024-11-26 18:08:07.544070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.144 [2024-11-26 18:08:07.544125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.144 [2024-11-26 18:08:07.544141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.144 #8 NEW cov: 12388 ft: 12790 corp: 3/55b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 PersAutoDict- DE: "\377["- 00:11:30.404 [2024-11-26 18:08:07.603532] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.603692] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.603826] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.604080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.604105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.604161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.604174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.604229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.604242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.404 #9 NEW cov: 12394 ft: 13094 corp: 4/82b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 ChangeByte- 00:11:30.404 [2024-11-26 18:08:07.643921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.643944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.643999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.644011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.404 #11 NEW cov: 12489 ft: 13746 corp: 5/101b lim: 35 exec/s: 0 rss: 73Mb L: 19/27 MS: 2 CrossOver-InsertRepeatedBytes- 00:11:30.404 [2024-11-26 18:08:07.683726] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.683877] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.684011] 
ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.684257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.684281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.684335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.684349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.684405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.684418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.404 #12 NEW cov: 12489 ft: 13916 corp: 6/128b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 ShuffleBytes- 00:11:30.404 [2024-11-26 18:08:07.744085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8f8f009e cdw11:8f008f8f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.744106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.404 #15 NEW cov: 12489 ft: 14249 corp: 7/140b lim: 35 exec/s: 0 rss: 73Mb L: 12/27 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:11:30.404 [2024-11-26 18:08:07.784204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8f1e009e cdw11:8f008f8f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.784226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.404 #16 NEW cov: 12489 ft: 14364 corp: 8/152b lim: 35 exec/s: 0 rss: 73Mb L: 12/27 MS: 1 ChangeByte- 00:11:30.404 [2024-11-26 18:08:07.844270] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.844435] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.404 [2024-11-26 18:08:07.844810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.844834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.844888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.404 [2024-11-26 18:08:07.844900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.404 [2024-11-26 18:08:07.844955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:30.404 [2024-11-26 18:08:07.844966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.663 #17 NEW cov: 12489 ft: 14478 corp: 9/176b lim: 35 exec/s: 0 rss: 73Mb L: 24/27 MS: 1 CrossOver- 00:11:30.663 [2024-11-26 18:08:07.884656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.663 [2024-11-26 18:08:07.884678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.663 [2024-11-26 18:08:07.884748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.663 [2024-11-26 18:08:07.884760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.663 #18 NEW cov: 12489 ft: 14528 corp: 10/196b lim: 35 exec/s: 0 rss: 73Mb L: 20/27 MS: 1 CopyPart- 00:11:30.663 [2024-11-26 18:08:07.944701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8f1e009e cdw11:8f008f8f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.663 [2024-11-26 18:08:07.944723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.663 #19 NEW cov: 12489 ft: 14583 corp: 11/207b lim: 35 exec/s: 0 rss: 74Mb L: 11/27 MS: 1 EraseBytes- 00:11:30.663 [2024-11-26 18:08:08.004723] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.663 [2024-11-26 18:08:08.004871] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.663 [2024-11-26 18:08:08.005007] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.663 [2024-11-26 18:08:08.005249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.663 [2024-11-26 18:08:08.005274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.663 [2024-11-26 18:08:08.005332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.663 [2024-11-26 18:08:08.005345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.663 [2024-11-26 18:08:08.005402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.663 [2024-11-26 18:08:08.005431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.663 #20 NEW cov: 12489 ft: 14667 corp: 12/232b lim: 35 exec/s: 0 rss: 74Mb L: 25/27 MS: 1 CrossOver- 00:11:30.663 [2024-11-26 18:08:08.064856] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.663 [2024-11-26 18:08:08.065009] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.663 [2024-11-26 18:08:08.065145] 
ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.664 [2024-11-26 18:08:08.065388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.664 [2024-11-26 18:08:08.065412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.664 [2024-11-26 18:08:08.065468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.664 [2024-11-26 18:08:08.065480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.664 [2024-11-26 18:08:08.065534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.664 [2024-11-26 18:08:08.065546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.664 #21 NEW cov: 12489 ft: 14728 corp: 13/259b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 ShuffleBytes- 00:11:30.664 [2024-11-26 18:08:08.104950] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.664 [2024-11-26 18:08:08.105105] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.664 [2024-11-26 18:08:08.105243] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.664 [2024-11-26 18:08:08.105503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.664 [2024-11-26 18:08:08.105529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.664 [2024-11-26 18:08:08.105584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00004200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.664 [2024-11-26 18:08:08.105598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.664 [2024-11-26 18:08:08.105655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.664 [2024-11-26 18:08:08.105668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.923 #22 NEW cov: 12489 ft: 14734 corp: 14/285b lim: 35 exec/s: 0 rss: 74Mb L: 26/27 MS: 1 InsertByte- 00:11:30.923 [2024-11-26 18:08:08.165162] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.923 [2024-11-26 18:08:08.165322] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.923 [2024-11-26 18:08:08.165463] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.923 [2024-11-26 18:08:08.165832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.165857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.923 [2024-11-26 18:08:08.165915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.165928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.923 [2024-11-26 18:08:08.165984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.165996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.923 [2024-11-26 18:08:08.166053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.166064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:30.923 #23 NEW cov: 12489 ft: 15245 corp: 15/316b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:11:30.923 [2024-11-26 18:08:08.225589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.225613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.923 [2024-11-26 18:08:08.225669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.225682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.923 #24 NEW cov: 12489 ft: 15263 corp: 16/336b lim: 35 exec/s: 0 rss: 74Mb L: 20/31 MS: 1 CrossOver- 00:11:30.923 [2024-11-26 18:08:08.265576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8f8f009e cdw11:8f008f8f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.923 [2024-11-26 18:08:08.265599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.923 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:30.923 #25 NEW cov: 12512 ft: 15292 corp: 17/348b lim: 35 exec/s: 0 rss: 74Mb L: 12/31 MS: 1 CrossOver- 00:11:30.924 [2024-11-26 18:08:08.305817] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.924 [2024-11-26 18:08:08.305966] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:30.924 [2024-11-26 18:08:08.306216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.924 [2024-11-26 18:08:08.306238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.924 [2024-11-26 18:08:08.306295] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.924 [2024-11-26 18:08:08.306307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:30.924 [2024-11-26 18:08:08.306362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.924 [2024-11-26 18:08:08.306380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:30.924 [2024-11-26 18:08:08.306434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.924 [2024-11-26 18:08:08.306448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:30.924 #26 NEW cov: 12512 ft: 15335 corp: 18/380b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:11:30.924 [2024-11-26 18:08:08.366042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.924 [2024-11-26 18:08:08.366066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:30.924 [2024-11-26 18:08:08.366120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.924 [2024-11-26 18:08:08.366132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.183 #27 NEW cov: 12512 ft: 15358 corp: 19/400b lim: 35 exec/s: 27 rss: 74Mb L: 20/32 MS: 1 ChangeByte- 00:11:31.183 [2024-11-26 18:08:08.405820] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.405978] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.406117] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.406257] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.406518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.406554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.183 [2024-11-26 18:08:08.406610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.406622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.183 [2024-11-26 18:08:08.406677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:007e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 
18:08:08.406689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.183 [2024-11-26 18:08:08.406744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.406756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.183 #28 NEW cov: 12512 ft: 15377 corp: 20/428b lim: 35 exec/s: 28 rss: 74Mb L: 28/32 MS: 1 InsertByte- 00:11:31.183 [2024-11-26 18:08:08.446409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.446434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.183 [2024-11-26 18:08:08.446505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.446517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.183 [2024-11-26 18:08:08.446576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.446586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.183 #29 NEW cov: 12512 ft: 15418 corp: 21/455b lim: 35 exec/s: 29 rss: 74Mb L: 27/32 MS: 1 InsertRepeatedBytes- 00:11:31.183 [2024-11-26 18:08:08.486164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.183 [2024-11-26 18:08:08.486186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.183 #31 NEW cov: 12512 ft: 15421 corp: 22/468b lim: 35 exec/s: 31 rss: 74Mb L: 13/32 MS: 2 PersAutoDict-CrossOver- DE: "\377["- 00:11:31.183 [2024-11-26 18:08:08.526210] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.526356] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.526502] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.183 [2024-11-26 18:08:08.527004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.527029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.527087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.527101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 
18:08:08.527155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.527168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.527222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:4b4b004b cdw11:4b004b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.527233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.527288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:0000004b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.527299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:31.184 #32 NEW cov: 12512 ft: 15460 corp: 23/503b lim: 35 exec/s: 32 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:31.184 [2024-11-26 18:08:08.566711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.566733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.566790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.566801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.566857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.566868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.184 #33 NEW cov: 12512 ft: 15506 corp: 24/530b lim: 35 exec/s: 33 rss: 74Mb L: 27/35 MS: 1 CopyPart- 00:11:31.184 [2024-11-26 18:08:08.606418] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.184 [2024-11-26 18:08:08.606565] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.184 [2024-11-26 18:08:08.606700] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.184 [2024-11-26 18:08:08.606951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.606977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.607036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000103 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.607050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:11:31.184 [2024-11-26 18:08:08.607105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.184 [2024-11-26 18:08:08.607118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.184 #34 NEW cov: 12512 ft: 15518 corp: 25/557b lim: 35 exec/s: 34 rss: 74Mb L: 27/35 MS: 1 ChangeBinInt- 00:11:31.443 [2024-11-26 18:08:08.646775] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.443 [2024-11-26 18:08:08.647162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.647185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.443 [2024-11-26 18:08:08.647240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.647251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.443 [2024-11-26 18:08:08.647307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.647320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.443 [2024-11-26 18:08:08.647381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.647393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.443 #35 NEW cov: 12512 ft: 15579 corp: 26/586b lim: 35 exec/s: 35 rss: 74Mb L: 29/35 MS: 1 CMP- DE: "\000\000"- 00:11:31.443 [2024-11-26 18:08:08.707019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:3300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.707041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.443 [2024-11-26 18:08:08.707098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.707110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.443 #36 NEW cov: 12512 ft: 15581 corp: 27/606b lim: 35 exec/s: 36 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:11:31.443 [2024-11-26 18:08:08.766987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8f1e009e cdw11:8f008f8f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.767009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.443 #42 NEW cov: 12512 ft: 15593 corp: 28/617b lim: 35 exec/s: 42 rss: 75Mb L: 11/35 MS: 1 
CrossOver- 00:11:31.443 [2024-11-26 18:08:08.827117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8f1e009e cdw11:8f008f8f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.827138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.443 #43 NEW cov: 12512 ft: 15599 corp: 29/629b lim: 35 exec/s: 43 rss: 75Mb L: 12/35 MS: 1 ChangeByte- 00:11:31.443 [2024-11-26 18:08:08.867382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.867404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.443 [2024-11-26 18:08:08.867475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.443 [2024-11-26 18:08:08.867487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.443 #44 NEW cov: 12512 ft: 15643 corp: 30/646b lim: 35 exec/s: 44 rss: 75Mb L: 17/35 MS: 1 EraseBytes- 00:11:31.702 [2024-11-26 18:08:08.907194] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.702 [2024-11-26 18:08:08.907345] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.702 [2024-11-26 18:08:08.907615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:08.907640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:08.907696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:08.907709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.702 #45 NEW cov: 12512 ft: 15676 corp: 31/660b lim: 35 exec/s: 45 rss: 75Mb L: 14/35 MS: 1 EraseBytes- 00:11:31.702 [2024-11-26 18:08:08.967415] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.702 [2024-11-26 18:08:08.967569] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.702 [2024-11-26 18:08:08.967705] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.702 [2024-11-26 18:08:08.967839] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.702 [2024-11-26 18:08:08.968088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:08.968112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:08.968168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:11:31.702 [2024-11-26 18:08:08.968181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:08.968237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:007e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:08.968250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:08.968305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:08.968318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.702 #46 NEW cov: 12512 ft: 15689 corp: 32/688b lim: 35 exec/s: 46 rss: 75Mb L: 28/35 MS: 1 CopyPart- 00:11:31.702 [2024-11-26 18:08:09.028183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8fff009e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:09.028206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:09.028279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:09.028291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:09.028346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:09.028356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.702 [2024-11-26 18:08:09.028415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:8f00ff1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.702 [2024-11-26 18:08:09.028426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.702 #47 NEW cov: 12512 ft: 15734 corp: 33/722b lim: 35 exec/s: 47 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:11:31.702 [2024-11-26 18:08:09.067968] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.703 [2024-11-26 18:08:09.068123] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.703 [2024-11-26 18:08:09.068380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.703 [2024-11-26 18:08:09.068402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.703 [2024-11-26 18:08:09.068458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.703 [2024-11-26 18:08:09.068470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.703 [2024-11-26 18:08:09.068525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.703 [2024-11-26 18:08:09.068538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.703 [2024-11-26 18:08:09.068593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.703 [2024-11-26 18:08:09.068606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.703 #48 NEW cov: 12512 ft: 15740 corp: 34/754b lim: 35 exec/s: 48 rss: 75Mb L: 32/35 MS: 1 ShuffleBytes- 00:11:31.703 [2024-11-26 18:08:09.127813] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.703 [2024-11-26 18:08:09.128210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.703 [2024-11-26 18:08:09.128234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.703 [2024-11-26 18:08:09.128294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000023 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.703 [2024-11-26 18:08:09.128306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.961 #49 NEW cov: 12512 ft: 15745 corp: 35/768b lim: 35 exec/s: 49 rss: 75Mb L: 14/35 MS: 1 ChangeByte- 00:11:31.961 [2024-11-26 18:08:09.187988] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.188142] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.188524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.188549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.188606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.188619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.188676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0000009c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.188687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.961 #50 NEW cov: 12512 ft: 15765 corp: 36/795b lim: 35 exec/s: 50 rss: 75Mb L: 27/35 MS: 1 ChangeByte- 00:11:31.961 [2024-11-26 18:08:09.228082] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 
00:11:31.961 [2024-11-26 18:08:09.228237] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.228494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.228518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.228576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.228590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.961 #51 NEW cov: 12512 ft: 15771 corp: 37/809b lim: 35 exec/s: 51 rss: 75Mb L: 14/35 MS: 1 CopyPart- 00:11:31.961 [2024-11-26 18:08:09.268228] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.268391] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.268526] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.268897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.268921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.268981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.268994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.269050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.269066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.269123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.269134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.961 #52 NEW cov: 12512 ft: 15793 corp: 38/838b lim: 35 exec/s: 52 rss: 75Mb L: 29/35 MS: 1 EraseBytes- 00:11:31.961 [2024-11-26 18:08:09.328412] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.328566] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.328700] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.328833] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.329077] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.329101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.329158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.329171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.329226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:9c000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.329239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.329295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.329308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:31.961 #53 NEW cov: 12512 ft: 15809 corp: 39/866b lim: 35 exec/s: 53 rss: 75Mb L: 28/35 MS: 1 CopyPart- 00:11:31.961 [2024-11-26 18:08:09.388817] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.388967] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:31.961 [2024-11-26 18:08:09.389223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.389246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.389305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.389317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.389378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000016 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.389392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:31.961 [2024-11-26 18:08:09.389450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:31.961 [2024-11-26 18:08:09.389466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:32.220 #54 NEW cov: 12512 ft: 15825 corp: 40/898b lim: 35 exec/s: 27 rss: 75Mb L: 32/35 MS: 1 ChangeByte- 00:11:32.220 #54 DONE cov: 12512 ft: 15825 corp: 40/898b lim: 35 exec/s: 27 rss: 75Mb 00:11:32.220 
###### Recommended dictionary. ###### 00:11:32.220 "\377[" # Uses: 3 00:11:32.220 "\000\000" # Uses: 0 00:11:32.220 ###### End of recommended dictionary. ###### 00:11:32.220 Done 54 runs in 2 second(s) 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:32.220 18:08:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:11:32.220 [2024-11-26 18:08:09.596927] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
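A condensed sketch of the per-run setup that the nvmf/run.sh xtrace above walks through for fuzzer index 3 (and, later, index 4). The long Jenkins workspace paths are abbreviated to $SPDK and $OUT, and the output redirection on the sed step is an assumption, since bash xtrace does not echo redirections; everything else is taken from the trace.

#!/usr/bin/env bash
# Sketch only -- not the actual nvmf/run.sh. $SPDK and $OUT stand in for the
# workspace paths shown in the trace above.
fuzzer_type=3                                    # -Z value for this run
timen=1                                          # -t, run time in seconds
core=0x1                                         # -m, reactor core mask
port="44$(printf %02d "$fuzzer_type")"           # printf %02d 3 -> 03, so port 4403
corpus_dir="$SPDK/../corpus/llvm_nvmf_${fuzzer_type}"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"

mkdir -p "$corpus_dir"
# Rewrite the listener port in the JSON config template for this run; writing the
# result to $nvmf_cfg is assumed (the redirection itself is not visible in the trace).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

"$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$OUT/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

Each pass of the ../common.sh loop repeats these steps with the next fuzzer index, which is why the following run listens on port 4404 with /tmp/fuzz_json_4.conf and corpus directory llvm_nvmf_4.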
00:11:32.220 [2024-11-26 18:08:09.596988] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286985 ] 00:11:32.480 [2024-11-26 18:08:09.818091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.480 [2024-11-26 18:08:09.857980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.480 [2024-11-26 18:08:09.920400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.739 [2024-11-26 18:08:09.936585] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:11:32.739 INFO: Running with entropic power schedule (0xFF, 100). 00:11:32.739 INFO: Seed: 2703436875 00:11:32.739 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:32.739 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:32.739 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:32.739 INFO: A corpus is not provided, starting from an empty corpus 00:11:32.739 #2 INITED exec/s: 0 rss: 65Mb 00:11:32.739 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:32.739 This may also happen if the target rejected all inputs we tried so far 00:11:32.998 NEW_FUNC[1/705]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:11:32.998 NEW_FUNC[2/705]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:32.998 #4 NEW cov: 12169 ft: 12164 corp: 2/7b lim: 20 exec/s: 0 rss: 73Mb L: 6/6 MS: 2 CrossOver-CMP- DE: "\001\000\000\001"- 00:11:32.998 #8 NEW cov: 12282 ft: 12671 corp: 3/11b lim: 20 exec/s: 0 rss: 73Mb L: 4/6 MS: 4 ShuffleBytes-CrossOver-CrossOver-CrossOver- 00:11:32.998 #9 NEW cov: 12288 ft: 13117 corp: 4/18b lim: 20 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertByte- 00:11:32.998 #10 NEW cov: 12398 ft: 13819 corp: 5/38b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:11:33.257 NEW_FUNC[1/4]: 0x1379248 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3484 00:11:33.257 NEW_FUNC[2/4]: 0x1379dc8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3426 00:11:33.257 #11 NEW cov: 12481 ft: 13994 corp: 6/58b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:11:33.257 #12 NEW cov: 12481 ft: 14053 corp: 7/78b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:11:33.257 #13 NEW cov: 12481 ft: 14095 corp: 8/98b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:11:33.516 #14 NEW cov: 12489 ft: 14297 corp: 9/111b lim: 20 exec/s: 0 rss: 74Mb L: 13/20 MS: 1 CopyPart- 00:11:33.516 #15 NEW cov: 12489 ft: 14326 corp: 10/131b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:33.516 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:33.516 #16 NEW cov: 12512 ft: 14486 corp: 11/151b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:33.816 #17 NEW cov: 12512 ft: 14526 corp: 12/171b lim: 20 exec/s: 17 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:11:33.816 #18 NEW cov: 12512 ft: 14551 corp: 
13/191b lim: 20 exec/s: 18 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:33.816 #19 NEW cov: 12512 ft: 14562 corp: 14/211b lim: 20 exec/s: 19 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:33.816 #20 NEW cov: 12512 ft: 14613 corp: 15/231b lim: 20 exec/s: 20 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:11:33.816 [2024-11-26 18:08:11.222061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.816 [2024-11-26 18:08:11.222106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:34.099 NEW_FUNC[1/15]: 0x186c8c8 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3300 00:11:34.099 NEW_FUNC[2/15]: 0x1891338 in nvme_ctrlr_process_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3260 00:11:34.099 #21 NEW cov: 12731 ft: 14890 corp: 16/251b lim: 20 exec/s: 21 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:34.099 #22 NEW cov: 12731 ft: 14908 corp: 17/271b lim: 20 exec/s: 22 rss: 74Mb L: 20/20 MS: 1 ChangeBit- 00:11:34.099 #23 NEW cov: 12731 ft: 14945 corp: 18/291b lim: 20 exec/s: 23 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:34.099 #24 NEW cov: 12731 ft: 14982 corp: 19/311b lim: 20 exec/s: 24 rss: 74Mb L: 20/20 MS: 1 CMP- DE: "\267\202+\003DF\205\000"- 00:11:34.377 #25 NEW cov: 12731 ft: 14994 corp: 20/331b lim: 20 exec/s: 25 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:34.377 #26 NEW cov: 12731 ft: 15004 corp: 21/351b lim: 20 exec/s: 26 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:11:34.377 #27 NEW cov: 12731 ft: 15012 corp: 22/371b lim: 20 exec/s: 27 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:11:34.651 #28 NEW cov: 12731 ft: 15036 corp: 23/391b lim: 20 exec/s: 28 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:11:34.652 #29 NEW cov: 12731 ft: 15044 corp: 24/411b lim: 20 exec/s: 29 rss: 75Mb L: 20/20 MS: 1 ShuffleBytes- 00:11:34.652 #30 NEW cov: 12731 ft: 15063 corp: 25/431b lim: 20 exec/s: 15 rss: 75Mb L: 20/20 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:11:34.652 #30 DONE cov: 12731 ft: 15063 corp: 25/431b lim: 20 exec/s: 15 rss: 75Mb 00:11:34.652 ###### Recommended dictionary. ###### 00:11:34.652 "\001\000\000\001" # Uses: 3 00:11:34.652 "\267\202+\003DF\205\000" # Uses: 0 00:11:34.652 ###### End of recommended dictionary. 
###### 00:11:34.652 Done 30 runs in 2 second(s) 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:34.923 18:08:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:11:34.923 [2024-11-26 18:08:12.174233] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:34.923 [2024-11-26 18:08:12.174312] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287509 ] 00:11:35.212 [2024-11-26 18:08:12.384336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.212 [2024-11-26 18:08:12.425049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.212 [2024-11-26 18:08:12.487422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.212 [2024-11-26 18:08:12.503595] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:11:35.212 INFO: Running with entropic power schedule (0xFF, 100). 
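As in the earlier runs, the two "echo leak:..." steps traced above, together with the LSAN_OPTIONS value, arrange LeakSanitizer suppressions before the target starts. The xtrace does not show where the echo output is redirected, so treating the suppress_file path as the destination is an assumption; the suppression names and the LSAN_OPTIONS string are copied from the trace.

# Assumed effect of the 'echo leak:...' steps: build an LSAN suppression file so
# leak reports from these two call sites do not fail the run.
suppress_file=/var/tmp/suppress_nvmf_fuzz
echo "leak:spdk_nvmf_qpair_disconnect" >  "$suppress_file"    # redirection assumed
echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"    # redirection assumed

# LeakSanitizer picks the file up through LSAN_OPTIONS (value as traced; the
# script sets it as a shell-local variable, shown here as an export for clarity).
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"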
00:11:35.212 INFO: Seed: 974493580 00:11:35.212 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:35.212 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:35.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:35.212 INFO: A corpus is not provided, starting from an empty corpus 00:11:35.212 #2 INITED exec/s: 0 rss: 66Mb 00:11:35.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:35.212 This may also happen if the target rejected all inputs we tried so far 00:11:35.212 [2024-11-26 18:08:12.559547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.212 [2024-11-26 18:08:12.559585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.212 [2024-11-26 18:08:12.559643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.212 [2024-11-26 18:08:12.559658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.504 NEW_FUNC[1/717]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:11:35.504 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:35.504 #13 NEW cov: 12296 ft: 12295 corp: 2/21b lim: 35 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:11:35.504 [2024-11-26 18:08:12.770753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.770802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.505 [2024-11-26 18:08:12.770879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.770900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.505 [2024-11-26 18:08:12.770973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.770992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.505 [2024-11-26 18:08:12.771067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.771087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.505 #15 NEW cov: 12409 ft: 13289 corp: 3/51b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 2 InsertByte-InsertRepeatedBytes- 00:11:35.505 [2024-11-26 18:08:12.819930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.819958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.505 #16 NEW cov: 12415 ft: 14185 corp: 4/61b lim: 35 exec/s: 0 rss: 74Mb L: 10/30 MS: 1 EraseBytes- 00:11:35.505 [2024-11-26 18:08:12.890645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.890673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.505 [2024-11-26 18:08:12.890732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.890746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.505 [2024-11-26 18:08:12.890802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.890815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.505 [2024-11-26 18:08:12.890872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff0c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.505 [2024-11-26 18:08:12.890888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.775 #17 NEW cov: 12500 ft: 14452 corp: 5/92b lim: 35 exec/s: 0 rss: 75Mb L: 31/31 MS: 1 InsertByte- 00:11:35.775 [2024-11-26 18:08:12.960804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.775 [2024-11-26 18:08:12.960831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.775 [2024-11-26 18:08:12.960889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.775 [2024-11-26 18:08:12.960903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.775 [2024-11-26 18:08:12.960960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:12.960973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:12.961026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:12.961039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.776 #18 NEW cov: 12500 ft: 14541 corp: 6/122b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 ChangeBit- 
00:11:35.776 [2024-11-26 18:08:13.010988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.011015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.011074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.011088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.011145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.011158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.011213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.011226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.776 #19 NEW cov: 12500 ft: 14609 corp: 7/152b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 ChangeBit- 00:11:35.776 [2024-11-26 18:08:13.061096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.061122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.061181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.061195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.061250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffffff7 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.061267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.061324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.061338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.776 #20 NEW cov: 12500 ft: 14670 corp: 8/182b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 ChangeBinInt- 00:11:35.776 [2024-11-26 18:08:13.101195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.101221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:11:35.776 [2024-11-26 18:08:13.101277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fffffff7 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.101291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.101347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.101360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.101419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.101433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.776 #21 NEW cov: 12500 ft: 14740 corp: 9/212b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 ChangeBit- 00:11:35.776 [2024-11-26 18:08:13.141305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.141331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.141391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.141405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.141462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff01ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.141475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.141531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:000a0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.141544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:35.776 #22 NEW cov: 12500 ft: 14760 corp: 10/242b lim: 35 exec/s: 0 rss: 75Mb L: 30/31 MS: 1 ChangeBinInt- 00:11:35.776 [2024-11-26 18:08:13.181433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.181459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.181515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff4242 cdw11:fff70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.181533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:11:35.776 [2024-11-26 18:08:13.181587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.181599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:35.776 [2024-11-26 18:08:13.181653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:35.776 [2024-11-26 18:08:13.181666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.055 #23 NEW cov: 12500 ft: 14784 corp: 11/276b lim: 35 exec/s: 0 rss: 75Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:11:36.055 [2024-11-26 18:08:13.251647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.251673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.251730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.251744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.251801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.251814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.251870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.251883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.055 #24 NEW cov: 12500 ft: 14865 corp: 12/308b lim: 35 exec/s: 0 rss: 75Mb L: 32/34 MS: 1 CopyPart- 00:11:36.055 [2024-11-26 18:08:13.321659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.321685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.321743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff4242 cdw11:fff70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.321757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.321814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.321827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:11:36.055 #25 NEW cov: 12500 ft: 15097 corp: 13/331b lim: 35 exec/s: 0 rss: 75Mb L: 23/34 MS: 1 CrossOver- 00:11:36.055 [2024-11-26 18:08:13.391877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.391902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.391960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.391978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.392033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.392047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.055 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:36.055 #26 NEW cov: 12523 ft: 15166 corp: 14/354b lim: 35 exec/s: 0 rss: 75Mb L: 23/34 MS: 1 EraseBytes- 00:11:36.055 [2024-11-26 18:08:13.472263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.472290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.472348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.472362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.472422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.472435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.055 [2024-11-26 18:08:13.472491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.055 [2024-11-26 18:08:13.472504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.332 #27 NEW cov: 12523 ft: 15198 corp: 15/386b lim: 35 exec/s: 0 rss: 75Mb L: 32/34 MS: 1 ShuffleBytes- 00:11:36.332 [2024-11-26 18:08:13.542466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.542493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.542552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:5 nsid:0 cdw10:ff0b4242 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.542565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.542621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.542634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.542689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.542702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.332 #28 NEW cov: 12523 ft: 15219 corp: 16/420b lim: 35 exec/s: 28 rss: 75Mb L: 34/34 MS: 1 ChangeBinInt- 00:11:36.332 [2024-11-26 18:08:13.592585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.592611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.592667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.592684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.592739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.592751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.592806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.592819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.332 #29 NEW cov: 12523 ft: 15272 corp: 17/453b lim: 35 exec/s: 29 rss: 75Mb L: 33/34 MS: 1 CopyPart- 00:11:36.332 [2024-11-26 18:08:13.642717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.642744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.642802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.642816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.332 [2024-11-26 18:08:13.642873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.332 [2024-11-26 18:08:13.642886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.642941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.642955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.333 #30 NEW cov: 12523 ft: 15302 corp: 18/484b lim: 35 exec/s: 30 rss: 75Mb L: 31/34 MS: 1 InsertByte- 00:11:36.333 [2024-11-26 18:08:13.692874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.692900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.692960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.692973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.693027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.693040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.693096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:32ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.693109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.333 #31 NEW cov: 12523 ft: 15320 corp: 19/515b lim: 35 exec/s: 31 rss: 75Mb L: 31/34 MS: 1 InsertByte- 00:11:36.333 [2024-11-26 18:08:13.763221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.763253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.763312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:42ff4242 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.763326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.763391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.763405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.763461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.763474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.333 [2024-11-26 18:08:13.763530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.333 [2024-11-26 18:08:13.763543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:36.590 #32 NEW cov: 12523 ft: 15407 corp: 20/550b lim: 35 exec/s: 32 rss: 76Mb L: 35/35 MS: 1 CopyPart- 00:11:36.590 [2024-11-26 18:08:13.833271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.833297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:13.833353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fffffff7 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.833367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:13.833428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.833442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:13.833497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.833510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.590 #33 NEW cov: 12523 ft: 15421 corp: 21/580b lim: 35 exec/s: 33 rss: 76Mb L: 30/35 MS: 1 CopyPart- 00:11:36.590 [2024-11-26 18:08:13.882844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00f7 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.882870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.590 #34 NEW cov: 12523 ft: 15441 corp: 22/590b lim: 35 exec/s: 34 rss: 76Mb L: 10/35 MS: 1 ChangeBinInt- 00:11:36.590 [2024-11-26 18:08:13.953628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.953657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:13.953714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.953732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.590 
[2024-11-26 18:08:13.953789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.953803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:13.953858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:13.953873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.590 #35 NEW cov: 12523 ft: 15498 corp: 23/624b lim: 35 exec/s: 35 rss: 76Mb L: 34/35 MS: 1 CopyPart- 00:11:36.590 [2024-11-26 18:08:14.023999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:14.024026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:14.024084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:42ff4242 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:14.024098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:14.024153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:14.024165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:14.024222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:14.024235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.590 [2024-11-26 18:08:14.024289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:42ffffff cdw11:ff070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.590 [2024-11-26 18:08:14.024303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:36.848 #36 NEW cov: 12523 ft: 15504 corp: 24/659b lim: 35 exec/s: 36 rss: 76Mb L: 35/35 MS: 1 CrossOver- 00:11:36.848 [2024-11-26 18:08:14.094018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.094045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.094102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.094116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.848 
[2024-11-26 18:08:14.094174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.094187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.094240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.094256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.848 #37 NEW cov: 12523 ft: 15580 corp: 25/693b lim: 35 exec/s: 37 rss: 76Mb L: 34/35 MS: 1 ShuffleBytes- 00:11:36.848 [2024-11-26 18:08:14.163854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.163881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.163938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.163952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.848 #38 NEW cov: 12523 ft: 15674 corp: 26/709b lim: 35 exec/s: 38 rss: 76Mb L: 16/35 MS: 1 CrossOver- 00:11:36.848 [2024-11-26 18:08:14.214360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.214393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.214449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff0b4242 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.214463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.214517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.214530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.214586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.214599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:36.848 #39 NEW cov: 12523 ft: 15683 corp: 27/743b lim: 35 exec/s: 39 rss: 76Mb L: 34/35 MS: 1 ChangeByte- 00:11:36.848 [2024-11-26 18:08:14.264295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.264322] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.264384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.264399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:36.848 [2024-11-26 18:08:14.264452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:36.848 [2024-11-26 18:08:14.264466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.105 #40 NEW cov: 12523 ft: 15698 corp: 28/767b lim: 35 exec/s: 40 rss: 76Mb L: 24/35 MS: 1 EraseBytes- 00:11:37.105 [2024-11-26 18:08:14.314282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff01ffff cdw11:85460002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.314309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.314366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0c58cdad cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.314390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.105 #41 NEW cov: 12523 ft: 15713 corp: 29/783b lim: 35 exec/s: 41 rss: 76Mb L: 16/35 MS: 1 CMP- DE: "\001\205FE\315\255\014X"- 00:11:37.105 [2024-11-26 18:08:14.385045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8546ff01 cdw11:45cd0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.385071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.385128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:42ff0c58 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.385142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.385199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.385212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.385270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.385283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.385338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:42ffffff cdw11:ff070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 
18:08:14.385352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:37.105 #42 NEW cov: 12523 ft: 15724 corp: 30/818b lim: 35 exec/s: 42 rss: 76Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\205FE\315\255\014X"- 00:11:37.105 [2024-11-26 18:08:14.455230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.455256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.455314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:42014242 cdw11:85460002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.455328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.455390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0c58cdad cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.455404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.455460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.455473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.455525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:42ffffff cdw11:ff070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.455538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:37.105 #43 NEW cov: 12523 ft: 15732 corp: 31/853b lim: 35 exec/s: 43 rss: 76Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\205FE\315\255\014X"- 00:11:37.105 [2024-11-26 18:08:14.504777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff01ffff cdw11:95460002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.504804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.105 [2024-11-26 18:08:14.504862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0c58cdad cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.105 [2024-11-26 18:08:14.504875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.363 #44 NEW cov: 12523 ft: 15750 corp: 32/869b lim: 35 exec/s: 22 rss: 76Mb L: 16/35 MS: 1 ChangeBit- 00:11:37.363 #44 DONE cov: 12523 ft: 15750 corp: 32/869b lim: 35 exec/s: 22 rss: 76Mb 00:11:37.363 ###### Recommended dictionary. ###### 00:11:37.363 "\001\205FE\315\255\014X" # Uses: 2 00:11:37.363 ###### End of recommended dictionary. 
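Note: the CREATE IO CQ (05) records in the run above come from fuzz_admin_create_io_completion_queue_command in llvm_nvme_fuzz.c (see the NEW_FUNC line near the start of the run); the mutated admin command dwords are what nvme_admin_qpair_print_command logs as cdw10/cdw11. The sketch below is an illustrative C decoding of those two dwords for this opcode, following the NVMe specification's Create I/O Completion Queue layout; the struct and function names are hypothetical and are not SPDK definitions.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal stand-in for the command fields printed in the log above.
 * Layout per the NVMe spec for Create I/O Completion Queue (opcode 05h). */
struct fuzzed_admin_cmd {           /* hypothetical name, not SPDK's struct */
    uint8_t  opc;                   /* 0x05 = CREATE IO CQ */
    uint32_t cdw10;                 /* [15:0] queue id, [31:16] queue size (0-based) */
    uint32_t cdw11;                 /* [0] PC, [1] IEN, [31:16] interrupt vector */
};

static void decode_create_io_cq(const struct fuzzed_admin_cmd *c)
{
    uint16_t qid   = c->cdw10 & 0xffff;
    uint16_t qsize = (c->cdw10 >> 16) & 0xffff;
    int      pc    = c->cdw11 & 0x1;
    int      ien   = (c->cdw11 >> 1) & 0x1;
    uint16_t iv    = (c->cdw11 >> 16) & 0xffff;

    printf("CREATE IO CQ qid=%u qsize=%u pc=%d ien=%d iv=%u\n",
           qid, qsize, pc, ien, iv);
}

int main(void)
{
    /* cdw10:ffffffff cdw11:ffff0003, as seen in several mutated inputs above */
    struct fuzzed_admin_cmd c = { .opc = 0x05, .cdw10 = 0xffffffff, .cdw11 = 0xffff0003 };
    decode_create_io_cq(&c);
    return 0;
}
```

Each such command in the log is immediately followed by the corresponding completion record from spdk_nvme_print_completion (here INVALID OPCODE, 00/01), which is how the fuzzer's progress through the admin command path shows up in this output.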
###### 00:11:37.363 Done 44 runs in 2 second(s) 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:37.363 18:08:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:11:37.363 [2024-11-26 18:08:14.725087] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:37.363 [2024-11-26 18:08:14.725146] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288030 ] 00:11:37.620 [2024-11-26 18:08:14.936848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.620 [2024-11-26 18:08:14.976881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.620 [2024-11-26 18:08:15.039223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.620 [2024-11-26 18:08:15.055421] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:11:37.878 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:37.878 INFO: Seed: 3524462640 00:11:37.878 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:37.878 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:37.878 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:37.878 INFO: A corpus is not provided, starting from an empty corpus 00:11:37.878 #2 INITED exec/s: 0 rss: 65Mb 00:11:37.878 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:37.878 This may also happen if the target rejected all inputs we tried so far 00:11:37.878 [2024-11-26 18:08:15.103397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.103424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.878 [2024-11-26 18:08:15.103495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.103507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.878 NEW_FUNC[1/717]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:11:37.878 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:37.878 #12 NEW cov: 12307 ft: 12306 corp: 2/19b lim: 45 exec/s: 0 rss: 73Mb L: 18/18 MS: 5 ChangeBit-ChangeByte-InsertByte-ChangeBit-InsertRepeatedBytes- 00:11:37.878 [2024-11-26 18:08:15.253815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.253841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.878 [2024-11-26 18:08:15.253909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.253921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:37.878 [2024-11-26 18:08:15.253969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:31912222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.253980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:37.878 #13 NEW cov: 12420 ft: 13233 corp: 3/53b lim: 45 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:11:37.878 [2024-11-26 18:08:15.313779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.313801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:37.878 [2024-11-26 18:08:15.313867] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:37.878 [2024-11-26 18:08:15.313878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.136 #14 NEW cov: 12426 ft: 13399 corp: 4/71b lim: 45 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 ChangeBit- 00:11:38.136 [2024-11-26 18:08:15.353859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2222c922 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.353882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.136 [2024-11-26 18:08:15.353934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.353949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.136 #16 NEW cov: 12511 ft: 13660 corp: 5/90b lim: 45 exec/s: 0 rss: 73Mb L: 19/34 MS: 2 InsertByte-CrossOver- 00:11:38.136 [2024-11-26 18:08:15.394009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.394033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.136 [2024-11-26 18:08:15.394084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22012222 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.394095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.136 #17 NEW cov: 12511 ft: 13747 corp: 6/116b lim: 45 exec/s: 0 rss: 73Mb L: 26/34 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:11:38.136 [2024-11-26 18:08:15.434095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.434118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.136 [2024-11-26 18:08:15.434182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.434193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.136 #18 NEW cov: 12511 ft: 13868 corp: 7/134b lim: 45 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 ChangeBit- 00:11:38.136 [2024-11-26 18:08:15.494305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.494329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.136 [2024-11-26 18:08:15.494380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.494392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.136 #19 NEW cov: 12511 ft: 13971 corp: 8/152b lim: 45 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 CopyPart- 00:11:38.136 [2024-11-26 18:08:15.534398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:d4dd2222 cdw11:dddd0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.534420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.136 [2024-11-26 18:08:15.534472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.534483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.136 #20 NEW cov: 12511 ft: 14006 corp: 9/170b lim: 45 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 ChangeBinInt- 00:11:38.136 [2024-11-26 18:08:15.574318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.136 [2024-11-26 18:08:15.574340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.394 #21 NEW cov: 12511 ft: 14754 corp: 10/179b lim: 45 exec/s: 0 rss: 73Mb L: 9/34 MS: 1 EraseBytes- 00:11:38.394 [2024-11-26 18:08:15.614628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.394 [2024-11-26 18:08:15.614657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.394 [2024-11-26 18:08:15.614707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22016422 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.614718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.395 #22 NEW cov: 12511 ft: 14783 corp: 11/205b lim: 45 exec/s: 0 rss: 73Mb L: 26/34 MS: 1 ChangeByte- 00:11:38.395 [2024-11-26 18:08:15.674808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.674831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.674881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.674891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.395 #23 NEW cov: 12511 ft: 14797 corp: 12/223b lim: 45 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 CrossOver- 00:11:38.395 [2024-11-26 18:08:15.735280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
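Note: this run (start_llvm_fuzz 5) drives fuzz_admin_create_io_submission_queue_command, so the records now show CREATE IO SQ (01) instead of CREATE IO CQ (05). Below is a hedged sketch of how cdw10/cdw11 are laid out for that opcode per the NVMe specification, in the same illustrative style as the earlier sketch; names are mine, not SPDK's.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode Create I/O Submission Queue (opcode 01h) dwords, per the NVMe spec:
 * cdw10 [15:0] queue id, [31:16] queue size (0-based)
 * cdw11 [0] PC, [2:1] queue priority, [31:16] completion queue id */
static void decode_create_io_sq(uint32_t cdw10, uint32_t cdw11)
{
    uint16_t qid   = cdw10 & 0xffff;
    uint16_t qsize = (cdw10 >> 16) & 0xffff;
    int      pc    = cdw11 & 0x1;
    int      qprio = (cdw11 >> 1) & 0x3;
    uint16_t cqid  = (cdw11 >> 16) & 0xffff;

    printf("CREATE IO SQ qid=%u qsize=%u pc=%d qprio=%d cqid=%u\n",
           qid, qsize, pc, qprio, cqid);
}

int main(void)
{
    /* cdw10:22222222 cdw11:22220001 from the corpus entries above decodes to
     * qid=0x2222, qsize=0x2222, pc=1, qprio=0, cqid=0x2222. */
    decode_create_io_sq(0x22222222, 0x22220001);
    return 0;
}
```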
00:11:38.395 [2024-11-26 18:08:15.735302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.735368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.735385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.735436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.735447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.735495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.735506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.395 #25 NEW cov: 12511 ft: 15140 corp: 13/267b lim: 45 exec/s: 0 rss: 73Mb L: 44/44 MS: 2 InsertByte-InsertRepeatedBytes- 00:11:38.395 [2024-11-26 18:08:15.775072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00122222 cdw11:dddd0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.775094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.775162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.775173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.395 #26 NEW cov: 12511 ft: 15161 corp: 14/285b lim: 45 exec/s: 0 rss: 74Mb L: 18/44 MS: 1 ChangeBinInt- 00:11:38.395 [2024-11-26 18:08:15.835600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.835622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.835694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22012222 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.835705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.835754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.835765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.395 [2024-11-26 18:08:15.835815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:22220001 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:11:38.395 [2024-11-26 18:08:15.835826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.655 #27 NEW cov: 12511 ft: 15246 corp: 15/322b lim: 45 exec/s: 0 rss: 74Mb L: 37/44 MS: 1 InsertRepeatedBytes- 00:11:38.655 [2024-11-26 18:08:15.875417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.875439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.655 [2024-11-26 18:08:15.875505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.875517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.655 #28 NEW cov: 12511 ft: 15309 corp: 16/340b lim: 45 exec/s: 0 rss: 74Mb L: 18/44 MS: 1 ChangeASCIIInt- 00:11:38.655 [2024-11-26 18:08:15.915840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.915862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.655 [2024-11-26 18:08:15.915915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22012222 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.915926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.655 [2024-11-26 18:08:15.915976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.915986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.655 [2024-11-26 18:08:15.916035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.916045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.655 #29 NEW cov: 12511 ft: 15353 corp: 17/377b lim: 45 exec/s: 0 rss: 74Mb L: 37/44 MS: 1 CopyPart- 00:11:38.655 [2024-11-26 18:08:15.975558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.655 [2024-11-26 18:08:15.975588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.656 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:38.656 #30 NEW cov: 12534 ft: 15390 corp: 18/393b lim: 45 exec/s: 0 rss: 74Mb L: 16/44 MS: 1 EraseBytes- 00:11:38.656 [2024-11-26 18:08:16.036345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:11:38.656 [2024-11-26 18:08:16.036370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.656 [2024-11-26 18:08:16.036425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.036436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.656 [2024-11-26 18:08:16.036487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a10aa1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.036497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.656 [2024-11-26 18:08:16.036548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.036557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.656 [2024-11-26 18:08:16.036607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.036618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:38.656 #31 NEW cov: 12534 ft: 15455 corp: 19/438b lim: 45 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 CrossOver- 00:11:38.656 [2024-11-26 18:08:16.096175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.096196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.656 [2024-11-26 18:08:16.096261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.096272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.656 [2024-11-26 18:08:16.096320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.656 [2024-11-26 18:08:16.096331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.914 #32 NEW cov: 12534 ft: 15459 corp: 20/468b lim: 45 exec/s: 32 rss: 74Mb L: 30/45 MS: 1 CopyPart- 00:11:38.914 [2024-11-26 18:08:16.136069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:46460185 cdw11:f2540007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.136091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.136158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:11:38.914 [2024-11-26 18:08:16.136169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.914 #33 NEW cov: 12534 ft: 15462 corp: 21/487b lim: 45 exec/s: 33 rss: 74Mb L: 19/45 MS: 1 CMP- DE: "\001\205FF\362T\354\276"- 00:11:38.914 [2024-11-26 18:08:16.196625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.196647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.196699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.196713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.196765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.196776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.196823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.196834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.914 #34 NEW cov: 12534 ft: 15474 corp: 22/531b lim: 45 exec/s: 34 rss: 74Mb L: 44/45 MS: 1 ChangeByte- 00:11:38.914 [2024-11-26 18:08:16.236390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:46460185 cdw11:f2540007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.236411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.236478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.236489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.914 #35 NEW cov: 12534 ft: 15489 corp: 23/550b lim: 45 exec/s: 35 rss: 74Mb L: 19/45 MS: 1 ShuffleBytes- 00:11:38.914 [2024-11-26 18:08:16.296586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:46460185 cdw11:f2540007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.296608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.296674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22252222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.296685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.914 #36 NEW cov: 12534 ft: 15510 corp: 24/569b 
lim: 45 exec/s: 36 rss: 74Mb L: 19/45 MS: 1 ChangeBinInt- 00:11:38.914 [2024-11-26 18:08:16.357288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.357309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.357359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.357370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:38.914 [2024-11-26 18:08:16.357421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a10aa1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.914 [2024-11-26 18:08:16.357432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:38.915 [2024-11-26 18:08:16.357483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.915 [2024-11-26 18:08:16.357493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:38.915 [2024-11-26 18:08:16.357541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:38.915 [2024-11-26 18:08:16.357554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:39.173 #37 NEW cov: 12534 ft: 15519 corp: 25/614b lim: 45 exec/s: 37 rss: 75Mb L: 45/45 MS: 1 ChangeBit- 00:11:39.173 [2024-11-26 18:08:16.417449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.417470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.417539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.417560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.417612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.417622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.417673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.417683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.417733] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.417742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:39.173 #38 NEW cov: 12534 ft: 15528 corp: 26/659b lim: 45 exec/s: 38 rss: 75Mb L: 45/45 MS: 1 CopyPart- 00:11:39.173 [2024-11-26 18:08:16.477607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.477629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.477680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.477691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.477739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a10aa1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.477750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.477799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.477810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.477859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.477869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:39.173 #39 NEW cov: 12534 ft: 15541 corp: 27/704b lim: 45 exec/s: 39 rss: 75Mb L: 45/45 MS: 1 ShuffleBytes- 00:11:39.173 [2024-11-26 18:08:16.517515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.517539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.517592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22012222 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.517602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.517651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.517661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.517709] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.517719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:39.173 #40 NEW cov: 12534 ft: 15571 corp: 28/741b lim: 45 exec/s: 40 rss: 75Mb L: 37/45 MS: 1 ShuffleBytes- 00:11:39.173 [2024-11-26 18:08:16.577292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.577313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.173 [2024-11-26 18:08:16.577385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22bd2222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.173 [2024-11-26 18:08:16.577396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.173 #41 NEW cov: 12534 ft: 15606 corp: 29/759b lim: 45 exec/s: 41 rss: 75Mb L: 18/45 MS: 1 ChangeByte- 00:11:39.432 [2024-11-26 18:08:16.637527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.637550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.432 [2024-11-26 18:08:16.637601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22012222 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.637611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.432 #42 NEW cov: 12534 ft: 15624 corp: 30/785b lim: 45 exec/s: 42 rss: 75Mb L: 26/45 MS: 1 CopyPart- 00:11:39.432 [2024-11-26 18:08:16.677608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:46460185 cdw11:f2540007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.677630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.432 [2024-11-26 18:08:16.677681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22252222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.677692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.432 #43 NEW cov: 12534 ft: 15632 corp: 31/804b lim: 45 exec/s: 43 rss: 75Mb L: 19/45 MS: 1 ChangeBinInt- 00:11:39.432 [2024-11-26 18:08:16.727595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.727617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.432 #44 NEW cov: 12534 ft: 15640 corp: 32/820b lim: 45 exec/s: 44 rss: 75Mb L: 16/45 MS: 1 ShuffleBytes- 00:11:39.432 [2024-11-26 18:08:16.787771] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:01220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.787794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.432 #45 NEW cov: 12534 ft: 15654 corp: 33/836b lim: 45 exec/s: 45 rss: 75Mb L: 16/45 MS: 1 ShuffleBytes- 00:11:39.432 [2024-11-26 18:08:16.848143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:46460185 cdw11:f2540007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.848166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.432 [2024-11-26 18:08:16.848233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22252222 cdw11:be220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.432 [2024-11-26 18:08:16.848243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.691 #46 NEW cov: 12534 ft: 15658 corp: 34/855b lim: 45 exec/s: 46 rss: 75Mb L: 19/45 MS: 1 CopyPart- 00:11:39.691 [2024-11-26 18:08:16.908633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:16.908655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:16.908706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:16.908717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:16.908768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:16.908779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:16.908844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:91222231 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:16.908855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:39.691 #47 NEW cov: 12534 ft: 15670 corp: 35/897b lim: 45 exec/s: 47 rss: 75Mb L: 42/45 MS: 1 InsertRepeatedBytes- 00:11:39.691 [2024-11-26 18:08:16.968438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:46460185 cdw11:f2540007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:16.968463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:16.968538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:01852222 cdw11:46460007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:16.968551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.691 #48 NEW cov: 12534 ft: 15718 corp: 36/916b lim: 45 exec/s: 48 rss: 75Mb L: 19/45 MS: 1 PersAutoDict- DE: "\001\205FF\362T\354\276"- 00:11:39.691 [2024-11-26 18:08:17.008573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.008596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:17.008663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:91222231 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.008677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.691 #49 NEW cov: 12534 ft: 15750 corp: 37/934b lim: 45 exec/s: 49 rss: 75Mb L: 18/45 MS: 1 CopyPart- 00:11:39.691 [2024-11-26 18:08:17.069254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.069277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:17.069347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.069358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:17.069413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a10aa1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.069427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:17.069475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.069485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:39.691 [2024-11-26 18:08:17.069532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:a1a1a1a1 cdw11:a1a10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:39.691 [2024-11-26 18:08:17.069542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:39.691 #50 NEW cov: 12534 ft: 15759 corp: 38/979b lim: 45 exec/s: 25 rss: 75Mb L: 45/45 MS: 1 ChangeByte- 00:11:39.691 #50 DONE cov: 12534 ft: 15759 corp: 38/979b lim: 45 exec/s: 25 rss: 75Mb 00:11:39.691 ###### Recommended dictionary. ###### 00:11:39.691 "\001\000\000\000\000\000\000\000" # Uses: 0 00:11:39.691 "\001\205FF\362T\354\276" # Uses: 1 00:11:39.691 ###### End of recommended dictionary. 
###### 00:11:39.691 Done 50 runs in 2 second(s) 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:39.950 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:11:39.951 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:39.951 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:39.951 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:39.951 18:08:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:11:39.951 [2024-11-26 18:08:17.249678] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:39.951 [2024-11-26 18:08:17.249737] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288552 ] 00:11:40.208 [2024-11-26 18:08:17.466478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.208 [2024-11-26 18:08:17.506705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.208 [2024-11-26 18:08:17.569057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.208 [2024-11-26 18:08:17.585242] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:11:40.208 INFO: Running with entropic power schedule (0xFF, 100). 
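Note on the harness trace just logged: each short-fuzz pass is wired up the same way — a per-fuzzer TCP service ID is derived from the fuzzer index, the JSON target config is rewritten to listen on it, LeakSanitizer suppressions are registered, and llvm_nvme_fuzz is launched against that listener. The lines below are a minimal bash sketch reconstructed from the commands visible in the trace above, not the nvmf/run.sh script itself; the fuzzer_type variable, the relative paths, and the omission of the -P output flag are illustrative simplifications.

#!/usr/bin/env bash
# Sketch of the per-fuzzer setup shown in the nvmf/run.sh trace above (assumptions noted).
fuzzer_type=6                                  # illustrative: the index passed to start_llvm_fuzz
port=44$(printf %02d "$fuzzer_type")           # printf %02d 6 -> "06", so port 4406 (4407 for fuzzer 7, ...)
corpus_dir=corpus/llvm_nvmf_${fuzzer_type}     # abbreviated; the job uses absolute workspace paths
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

mkdir -p "$corpus_dir"

# Rewrite the template config so the target listens on the per-fuzzer port instead of the default 4420.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf > "$nvmf_cfg"

# Known-benign leaks are suppressed so LeakSanitizer does not fail the run.
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"

# Flags mirror the logged invocation: core mask, memory size (seen as "-m 512" in the EAL
# parameters above), target trid, per-fuzzer config, time limit, corpus dir, and -Z to select
# which admin-command fuzzer runs (the NEW_FUNC lines above show the command it exercises).
LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
  ./llvm_nvme_fuzz -m 0x1 -s 512 -F "$trid" -c "$nvmf_cfg" \
    -t 1 -D "$corpus_dir" -Z "$fuzzer_type"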
00:11:40.208 INFO: Seed: 1761496911 00:11:40.208 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:40.208 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:40.208 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:40.208 INFO: A corpus is not provided, starting from an empty corpus 00:11:40.208 #2 INITED exec/s: 0 rss: 65Mb 00:11:40.208 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:40.208 This may also happen if the target rejected all inputs we tried so far 00:11:40.208 [2024-11-26 18:08:17.630893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:11:40.209 [2024-11-26 18:08:17.630920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.465 NEW_FUNC[1/715]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:11:40.465 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:40.465 #3 NEW cov: 12224 ft: 12214 corp: 2/3b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:11:40.465 [2024-11-26 18:08:17.781222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f6fe cdw11:00000000 00:11:40.465 [2024-11-26 18:08:17.781249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.465 #4 NEW cov: 12337 ft: 12738 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBinInt- 00:11:40.465 [2024-11-26 18:08:17.841341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6fe cdw11:00000000 00:11:40.465 [2024-11-26 18:08:17.841364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.465 #5 NEW cov: 12343 ft: 12970 corp: 4/7b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:11:40.465 [2024-11-26 18:08:17.891463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b67e cdw11:00000000 00:11:40.465 [2024-11-26 18:08:17.891487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.723 #8 NEW cov: 12428 ft: 13386 corp: 5/9b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 3 EraseBytes-CopyPart-InsertByte- 00:11:40.723 [2024-11-26 18:08:17.951616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021b6 cdw11:00000000 00:11:40.723 [2024-11-26 18:08:17.951642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.723 #9 NEW cov: 12428 ft: 13525 corp: 6/12b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:11:40.723 [2024-11-26 18:08:17.991744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6de cdw11:00000000 00:11:40.723 [2024-11-26 18:08:17.991766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.723 #12 NEW cov: 12428 
ft: 13579 corp: 7/14b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:11:40.723 [2024-11-26 18:08:18.041883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6fe cdw11:00000000 00:11:40.723 [2024-11-26 18:08:18.041904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.723 #13 NEW cov: 12428 ft: 13676 corp: 8/16b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 EraseBytes- 00:11:40.723 [2024-11-26 18:08:18.102077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b632 cdw11:00000000 00:11:40.723 [2024-11-26 18:08:18.102099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.723 #14 NEW cov: 12428 ft: 13681 corp: 9/18b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeByte- 00:11:40.723 [2024-11-26 18:08:18.142182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007d0a cdw11:00000000 00:11:40.723 [2024-11-26 18:08:18.142205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.723 #15 NEW cov: 12428 ft: 13693 corp: 10/20b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 InsertByte- 00:11:40.982 [2024-11-26 18:08:18.182301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feb6 cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.182324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.982 #16 NEW cov: 12428 ft: 13758 corp: 11/23b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ShuffleBytes- 00:11:40.982 [2024-11-26 18:08:18.222877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6ff cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.222900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.982 [2024-11-26 18:08:18.222954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.222965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:40.982 [2024-11-26 18:08:18.223018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.223028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:40.982 [2024-11-26 18:08:18.223080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fffe cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.223091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:40.982 #17 NEW cov: 12428 ft: 14073 corp: 12/31b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:11:40.982 [2024-11-26 18:08:18.282581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feb6 cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.282604] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.982 #18 NEW cov: 12428 ft: 14097 corp: 13/34b lim: 10 exec/s: 0 rss: 72Mb L: 3/8 MS: 1 CopyPart- 00:11:40.982 [2024-11-26 18:08:18.343113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005bac cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.343135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.982 [2024-11-26 18:08:18.343206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000acac cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.343217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:40.982 [2024-11-26 18:08:18.343268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000acac cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.343279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:40.982 #21 NEW cov: 12428 ft: 14288 corp: 14/41b lim: 10 exec/s: 0 rss: 72Mb L: 7/8 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:11:40.982 [2024-11-26 18:08:18.383017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8fe cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.383039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:40.982 [2024-11-26 18:08:18.383090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b621 cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.383101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:40.982 #22 NEW cov: 12428 ft: 14492 corp: 15/45b lim: 10 exec/s: 0 rss: 72Mb L: 4/8 MS: 1 InsertByte- 00:11:40.982 [2024-11-26 18:08:18.423011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:11:40.982 [2024-11-26 18:08:18.423034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.240 #23 NEW cov: 12428 ft: 14501 corp: 16/47b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 InsertByte- 00:11:41.240 [2024-11-26 18:08:18.463449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:41.240 [2024-11-26 18:08:18.463476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.240 [2024-11-26 18:08:18.463552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:41.240 [2024-11-26 18:08:18.463565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:41.240 [2024-11-26 18:08:18.463625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a3b cdw11:00000000 00:11:41.240 [2024-11-26 18:08:18.463636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:11:41.241 #24 NEW cov: 12428 ft: 14508 corp: 17/53b lim: 10 exec/s: 0 rss: 73Mb L: 6/8 MS: 1 InsertRepeatedBytes- 00:11:41.241 [2024-11-26 18:08:18.523731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005400 cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.523753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.241 [2024-11-26 18:08:18.523819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.523831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:41.241 [2024-11-26 18:08:18.523883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.523893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:41.241 [2024-11-26 18:08:18.523947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.523957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:41.241 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:41.241 #25 NEW cov: 12451 ft: 14535 corp: 18/61b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:11:41.241 [2024-11-26 18:08:18.583491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b7fe cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.583514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.241 #26 NEW cov: 12451 ft: 14597 corp: 19/63b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 1 ChangeByte- 00:11:41.241 [2024-11-26 18:08:18.623615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002127 cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.623637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.241 #27 NEW cov: 12451 ft: 14611 corp: 20/66b lim: 10 exec/s: 27 rss: 73Mb L: 3/8 MS: 1 ChangeByte- 00:11:41.241 [2024-11-26 18:08:18.663751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002127 cdw11:00000000 00:11:41.241 [2024-11-26 18:08:18.663773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.499 #28 NEW cov: 12451 ft: 14643 corp: 21/69b lim: 10 exec/s: 28 rss: 73Mb L: 3/8 MS: 1 ShuffleBytes- 00:11:41.499 [2024-11-26 18:08:18.723911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021b6 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.723933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.499 #29 NEW cov: 12451 ft: 14690 corp: 22/72b lim: 10 exec/s: 29 rss: 73Mb L: 3/8 MS: 1 ChangeBit- 00:11:41.499 [2024-11-26 18:08:18.764614] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.764636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.499 [2024-11-26 18:08:18.764689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.764699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:41.499 [2024-11-26 18:08:18.764749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006666 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.764760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:41.499 [2024-11-26 18:08:18.764812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006666 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.764822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:41.499 [2024-11-26 18:08:18.764872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a3b cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.764883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:41.499 #30 NEW cov: 12451 ft: 14814 corp: 23/82b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:11:41.499 [2024-11-26 18:08:18.824221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6ac cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.824243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.499 #31 NEW cov: 12451 ft: 14826 corp: 24/84b lim: 10 exec/s: 31 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:11:41.499 [2024-11-26 18:08:18.864454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feb6 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.864476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.499 [2024-11-26 18:08:18.864543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000de21 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.864554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:41.499 #32 NEW cov: 12451 ft: 14838 corp: 25/88b lim: 10 exec/s: 32 rss: 73Mb L: 4/10 MS: 1 CrossOver- 00:11:41.499 [2024-11-26 18:08:18.924649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.924672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.499 [2024-11-26 18:08:18.924726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000b6 cdw11:00000000 00:11:41.499 [2024-11-26 18:08:18.924736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:41.757 #33 NEW cov: 12451 ft: 14853 corp: 26/93b lim: 10 exec/s: 33 rss: 73Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:11:41.757 [2024-11-26 18:08:18.984733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6b6 cdw11:00000000 00:11:41.757 [2024-11-26 18:08:18.984755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.757 #34 NEW cov: 12451 ft: 14861 corp: 27/95b lim: 10 exec/s: 34 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:11:41.757 [2024-11-26 18:08:19.024798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b60a cdw11:00000000 00:11:41.757 [2024-11-26 18:08:19.024820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.757 #35 NEW cov: 12451 ft: 14867 corp: 28/97b lim: 10 exec/s: 35 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:11:41.757 [2024-11-26 18:08:19.064874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7d cdw11:00000000 00:11:41.757 [2024-11-26 18:08:19.064896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.757 #36 NEW cov: 12451 ft: 14882 corp: 29/99b lim: 10 exec/s: 36 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:11:41.757 [2024-11-26 18:08:19.115004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021b6 cdw11:00000000 00:11:41.757 [2024-11-26 18:08:19.115025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.757 #37 NEW cov: 12451 ft: 14914 corp: 30/102b lim: 10 exec/s: 37 rss: 73Mb L: 3/10 MS: 1 ShuffleBytes- 00:11:41.757 [2024-11-26 18:08:19.145277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021b6 cdw11:00000000 00:11:41.757 [2024-11-26 18:08:19.145298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:41.757 [2024-11-26 18:08:19.145349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000feff cdw11:00000000 00:11:41.757 [2024-11-26 18:08:19.145360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:41.757 #38 NEW cov: 12451 ft: 14926 corp: 31/107b lim: 10 exec/s: 38 rss: 73Mb L: 5/10 MS: 1 CrossOver- 00:11:42.016 [2024-11-26 18:08:19.205590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002121 cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.205617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.205670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b6fe cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.205680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.205736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000b6fe 
cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.205746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:42.016 #39 NEW cov: 12451 ft: 14931 corp: 32/113b lim: 10 exec/s: 39 rss: 73Mb L: 6/10 MS: 1 CopyPart- 00:11:42.016 [2024-11-26 18:08:19.245861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8fe cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.245883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.245950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b6f8 cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.245961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.246012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000feb6 cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.246023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.246071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002121 cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.246081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:42.016 #40 NEW cov: 12451 ft: 14955 corp: 33/121b lim: 10 exec/s: 40 rss: 74Mb L: 8/10 MS: 1 CopyPart- 00:11:42.016 [2024-11-26 18:08:19.305892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024ac cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.305913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.305981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000acac cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.305992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.306043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000acac cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.306053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:42.016 #41 NEW cov: 12451 ft: 14957 corp: 34/128b lim: 10 exec/s: 41 rss: 74Mb L: 7/10 MS: 1 ChangeByte- 00:11:42.016 [2024-11-26 18:08:19.365892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000011f cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.365915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.365967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b7fe cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.365978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.415999] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000011f cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.416020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.016 [2024-11-26 18:08:19.416091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b7ff cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.416102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.016 #43 NEW cov: 12451 ft: 14978 corp: 35/132b lim: 10 exec/s: 43 rss: 74Mb L: 4/10 MS: 2 CMP-ChangeBit- DE: "\001\037"- 00:11:42.016 [2024-11-26 18:08:19.455956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b6fe cdw11:00000000 00:11:42.016 [2024-11-26 18:08:19.455980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.274 #44 NEW cov: 12451 ft: 15010 corp: 36/134b lim: 10 exec/s: 44 rss: 74Mb L: 2/10 MS: 1 CopyPart- 00:11:42.274 [2024-11-26 18:08:19.496103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7d cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.496125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.274 #45 NEW cov: 12451 ft: 15024 corp: 37/136b lim: 10 exec/s: 45 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:11:42.274 [2024-11-26 18:08:19.556409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000011f cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.556431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.274 [2024-11-26 18:08:19.556485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b7ff cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.556496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.274 #46 NEW cov: 12451 ft: 15041 corp: 38/140b lim: 10 exec/s: 46 rss: 74Mb L: 4/10 MS: 1 ShuffleBytes- 00:11:42.274 [2024-11-26 18:08:19.616864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.616885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:42.274 [2024-11-26 18:08:19.616955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.616965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:42.274 [2024-11-26 18:08:19.617018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.617029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:42.274 [2024-11-26 18:08:19.617078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 
nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.274 [2024-11-26 18:08:19.617089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:42.274 #47 NEW cov: 12451 ft: 15070 corp: 39/149b lim: 10 exec/s: 23 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:11:42.274 #47 DONE cov: 12451 ft: 15070 corp: 39/149b lim: 10 exec/s: 23 rss: 74Mb 00:11:42.274 ###### Recommended dictionary. ###### 00:11:42.274 "\001\037" # Uses: 0 00:11:42.274 ###### End of recommended dictionary. ###### 00:11:42.274 Done 47 runs in 2 second(s) 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:42.533 18:08:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:11:42.533 [2024-11-26 18:08:19.794326] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:42.533 [2024-11-26 18:08:19.794414] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288899 ] 00:11:42.791 [2024-11-26 18:08:20.014243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.791 [2024-11-26 18:08:20.063101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.791 [2024-11-26 18:08:20.125526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.791 [2024-11-26 18:08:20.141714] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:11:42.791 INFO: Running with entropic power schedule (0xFF, 100). 00:11:42.791 INFO: Seed: 23531509 00:11:42.791 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:42.791 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:42.791 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:11:42.791 INFO: A corpus is not provided, starting from an empty corpus 00:11:42.791 #2 INITED exec/s: 0 rss: 65Mb 00:11:42.791 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:42.791 This may also happen if the target rejected all inputs we tried so far 00:11:42.791 [2024-11-26 18:08:20.187292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:11:42.792 [2024-11-26 18:08:20.187319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.050 NEW_FUNC[1/715]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:11:43.050 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:43.050 #6 NEW cov: 12216 ft: 12215 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 4 ChangeBit-ChangeBit-CopyPart-CrossOver- 00:11:43.050 [2024-11-26 18:08:20.378256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.378288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.050 [2024-11-26 18:08:20.378339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.378350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.050 [2024-11-26 18:08:20.378399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.378410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.050 [2024-11-26 18:08:20.378456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.378466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
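[editor's note] The nvmf/run.sh trace just above shows how this fuzzer run (target 7) is parameterized before llvm_nvme_fuzz is launched: a per-target TCP port is derived from the fuzzer number, the listener port in the JSON config is rewritten with sed, LSAN leak suppressions are written out, and the fuzzer is started against the freshly listening target. A minimal sketch of that pattern follows; variable names such as rootdir/testdir and the output redirections are assumptions (bash xtrace does not show redirections), not the literal script.

    fuzzer_type=7
    timen=1                                              # seconds passed to -t
    core=0x1                                             # core mask passed to -m
    port=44$(printf %02d "$fuzzer_type")                 # 4407, as echoed by "port=4407" above
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    mkdir -p "$corpus_dir"
    # assumption: sed output is redirected into $nvmf_cfg (the file later removed by run.sh@54)
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" "$testdir/fuzz_json.conf" > "$nvmf_cfg"
    # assumption: the two leak: lines are collected into the LSAN suppressions file
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > "$suppress_file"
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
      -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
      -D "$corpus_dir" -Z "$fuzzer_type"

[end editor's note]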
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.050 [2024-11-26 18:08:20.378514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.378524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:43.050 #12 NEW cov: 12337 ft: 13033 corp: 3/13b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:11:43.050 [2024-11-26 18:08:20.437873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.437896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.050 #13 NEW cov: 12343 ft: 13397 corp: 4/15b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:11:43.050 [2024-11-26 18:08:20.477945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:11:43.050 [2024-11-26 18:08:20.477967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.308 #14 NEW cov: 12428 ft: 13687 corp: 5/18b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:11:43.308 [2024-11-26 18:08:20.538094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000006f6 cdw11:00000000 00:11:43.308 [2024-11-26 18:08:20.538117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.308 #15 NEW cov: 12428 ft: 13794 corp: 6/20b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:11:43.308 [2024-11-26 18:08:20.578239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000006f6 cdw11:00000000 00:11:43.308 [2024-11-26 18:08:20.578262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.308 #16 NEW cov: 12428 ft: 13868 corp: 7/22b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:11:43.308 [2024-11-26 18:08:20.628353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000470a cdw11:00000000 00:11:43.308 [2024-11-26 18:08:20.628380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.308 #17 NEW cov: 12428 ft: 13932 corp: 8/24b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:11:43.308 [2024-11-26 18:08:20.668488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000025 cdw11:00000000 00:11:43.308 [2024-11-26 18:08:20.668509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.308 #18 NEW cov: 12428 ft: 13948 corp: 9/27b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 ChangeByte- 00:11:43.308 [2024-11-26 18:08:20.718646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000025 cdw11:00000000 00:11:43.308 [2024-11-26 18:08:20.718673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 #19 NEW cov: 12428 ft: 13972 corp: 10/30b lim: 10 exec/s: 
0 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:11:43.567 [2024-11-26 18:08:20.768765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000013 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.768787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 #20 NEW cov: 12428 ft: 14021 corp: 11/33b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 ChangeByte- 00:11:43.567 [2024-11-26 18:08:20.809296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.809318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.809384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.809396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.809443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.809453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.809501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.809511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.567 #21 NEW cov: 12428 ft: 14051 corp: 12/41b lim: 10 exec/s: 0 rss: 73Mb L: 8/10 MS: 1 EraseBytes- 00:11:43.567 [2024-11-26 18:08:20.869633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000679 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.869655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.869719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007979 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.869730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.869777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007979 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.869787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.869833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00007979 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.869843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.869891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000079f6 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.869901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:43.567 #22 NEW cov: 12428 ft: 14101 corp: 13/51b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:11:43.567 [2024-11-26 18:08:20.909567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000006f6 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.909589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.909653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.909666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.909713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.909724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:20.909770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.909780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.567 #23 NEW cov: 12428 ft: 14115 corp: 14/60b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:11:43.567 [2024-11-26 18:08:20.949285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000006f2 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.949308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 #24 NEW cov: 12428 ft: 14179 corp: 15/62b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:11:43.567 [2024-11-26 18:08:20.999966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:20.999989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:21.000053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:21.000065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:21.000110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:21.000120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:21.000167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.567 [2024-11-26 18:08:21.000177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.567 [2024-11-26 18:08:21.000225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000079f6 cdw11:00000000 00:11:43.567 
[2024-11-26 18:08:21.000235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:43.826 #25 NEW cov: 12428 ft: 14189 corp: 16/72b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:11:43.826 [2024-11-26 18:08:21.060097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000013 cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.060118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.826 [2024-11-26 18:08:21.060184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.060194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.826 [2024-11-26 18:08:21.060242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.060252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.826 [2024-11-26 18:08:21.060302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f823 cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.060312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.826 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:43.826 #26 NEW cov: 12451 ft: 14223 corp: 17/80b lim: 10 exec/s: 0 rss: 73Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:11:43.826 [2024-11-26 18:08:21.120173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.120196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.826 [2024-11-26 18:08:21.120258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.120269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:43.826 [2024-11-26 18:08:21.120316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.120327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:43.826 [2024-11-26 18:08:21.120379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.120391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:43.826 #27 NEW cov: 12451 ft: 14243 corp: 18/88b lim: 10 exec/s: 0 rss: 73Mb L: 8/10 MS: 1 ShuffleBytes- 00:11:43.826 [2024-11-26 18:08:21.180003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000013 cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.180025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.826 #28 NEW cov: 12451 ft: 14305 corp: 19/91b lim: 10 exec/s: 28 rss: 74Mb L: 3/10 MS: 1 ShuffleBytes- 00:11:43.826 [2024-11-26 18:08:21.220064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000040a cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.220086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.826 #29 NEW cov: 12451 ft: 14339 corp: 20/93b lim: 10 exec/s: 29 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:11:43.826 [2024-11-26 18:08:21.260186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000470a cdw11:00000000 00:11:43.826 [2024-11-26 18:08:21.260207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.085 #30 NEW cov: 12451 ft: 14355 corp: 21/96b lim: 10 exec/s: 30 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:11:44.085 [2024-11-26 18:08:21.320360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000056f2 cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.320392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.085 #31 NEW cov: 12451 ft: 14377 corp: 22/98b lim: 10 exec/s: 31 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:11:44.085 [2024-11-26 18:08:21.370522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000024 cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.370543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.085 #32 NEW cov: 12451 ft: 14392 corp: 23/101b lim: 10 exec/s: 32 rss: 74Mb L: 3/10 MS: 1 ChangeBit- 00:11:44.085 [2024-11-26 18:08:21.421054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.421076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.421139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.421154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.421200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.421210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.421256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000006f2 cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.421266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.085 #33 NEW cov: 12451 ft: 14399 corp: 24/109b lim: 10 exec/s: 33 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:11:44.085 [2024-11-26 18:08:21.461135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a3a 
cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.461156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.461219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.461230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.461276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.461285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.461333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.461343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.085 #34 NEW cov: 12451 ft: 14434 corp: 25/117b lim: 10 exec/s: 34 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:11:44.085 [2024-11-26 18:08:21.501262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.501282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.501347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.501358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.501410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000006ff cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.501421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.085 [2024-11-26 18:08:21.501467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fff2 cdw11:00000000 00:11:44.085 [2024-11-26 18:08:21.501477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.344 #35 NEW cov: 12451 ft: 14444 corp: 26/125b lim: 10 exec/s: 35 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:11:44.344 [2024-11-26 18:08:21.561030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000025 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.561052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.344 #36 NEW cov: 12451 ft: 14495 corp: 27/128b lim: 10 exec/s: 36 rss: 74Mb L: 3/10 MS: 1 ChangeBinInt- 00:11:44.344 [2024-11-26 18:08:21.591559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000013 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.591586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 
18:08:21.591634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.591645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.591691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fcf8 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.591701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.591748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f823 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.591758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.344 #37 NEW cov: 12451 ft: 14502 corp: 28/136b lim: 10 exec/s: 37 rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:11:44.344 [2024-11-26 18:08:21.641673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003a00 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.641694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.641743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.641754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.641802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003a3a cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.641812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.641858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003a0a cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.641869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.344 #38 NEW cov: 12451 ft: 14512 corp: 29/144b lim: 10 exec/s: 38 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:11:44.344 [2024-11-26 18:08:21.681363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000470a cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.681389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.344 #39 NEW cov: 12451 ft: 14551 corp: 30/147b lim: 10 exec/s: 39 rss: 74Mb L: 3/10 MS: 1 ChangeByte- 00:11:44.344 [2024-11-26 18:08:21.731559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.731580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.344 #41 NEW cov: 12451 ft: 14553 corp: 31/149b lim: 10 exec/s: 41 rss: 74Mb L: 2/10 MS: 2 ShuffleBytes-InsertByte- 00:11:44.344 [2024-11-26 18:08:21.772092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:4 nsid:0 cdw10:000039c0 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.772113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.772161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000927a cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.772172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.772220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004a46 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.772232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.344 [2024-11-26 18:08:21.772280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008500 cdw11:00000000 00:11:44.344 [2024-11-26 18:08:21.772290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.602 #46 NEW cov: 12451 ft: 14571 corp: 32/158b lim: 10 exec/s: 46 rss: 74Mb L: 9/10 MS: 5 EraseBytes-CrossOver-ChangeBit-ChangeByte-CMP- DE: "9\300\222zJF\205\000"- 00:11:44.602 [2024-11-26 18:08:21.812286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.812307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.812370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.812384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.812432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000641 cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.812442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.812489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.812499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.602 #47 NEW cov: 12451 ft: 14585 corp: 33/167b lim: 10 exec/s: 47 rss: 74Mb L: 9/10 MS: 1 InsertByte- 00:11:44.602 [2024-11-26 18:08:21.871987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff08 cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.872008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.602 #48 NEW cov: 12451 ft: 14622 corp: 34/169b lim: 10 exec/s: 48 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:11:44.602 [2024-11-26 18:08:21.922474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.922496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.922545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.922556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.922602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.922612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.922659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000016f2 cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.922668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.602 #49 NEW cov: 12451 ft: 14635 corp: 35/177b lim: 10 exec/s: 49 rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:11:44.602 [2024-11-26 18:08:21.962593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.962616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.962683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.962694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.962739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000006b4 cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.962749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.602 [2024-11-26 18:08:21.962799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fff2 cdw11:00000000 00:11:44.602 [2024-11-26 18:08:21.962808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.602 #50 NEW cov: 12451 ft: 14655 corp: 36/185b lim: 10 exec/s: 50 rss: 74Mb L: 8/10 MS: 1 ChangeByte- 00:11:44.603 [2024-11-26 18:08:22.002734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000006f6 cdw11:00000000 00:11:44.603 [2024-11-26 18:08:22.002757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.603 [2024-11-26 18:08:22.002821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000041ff cdw11:00000000 00:11:44.603 [2024-11-26 18:08:22.002832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.603 [2024-11-26 18:08:22.002879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.603 [2024-11-26 18:08:22.002889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.603 [2024-11-26 18:08:22.002935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:44.603 [2024-11-26 18:08:22.002945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.603 #51 NEW cov: 12451 ft: 14721 corp: 37/194b lim: 10 exec/s: 51 rss: 75Mb L: 9/10 MS: 1 ChangeByte- 00:11:44.861 [2024-11-26 18:08:22.062973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000639 cdw11:00000000 00:11:44.861 [2024-11-26 18:08:22.062994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.862 [2024-11-26 18:08:22.063058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c092 cdw11:00000000 00:11:44.862 [2024-11-26 18:08:22.063068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.862 [2024-11-26 18:08:22.063117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007a4a cdw11:00000000 00:11:44.862 [2024-11-26 18:08:22.063127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.862 [2024-11-26 18:08:22.063172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004685 cdw11:00000000 00:11:44.862 [2024-11-26 18:08:22.063182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.862 [2024-11-26 18:08:22.063231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000000f6 cdw11:00000000 00:11:44.862 [2024-11-26 18:08:22.063241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:44.862 #52 NEW cov: 12451 ft: 14726 corp: 38/204b lim: 10 exec/s: 52 rss: 75Mb L: 10/10 MS: 1 PersAutoDict- DE: "9\300\222zJF\205\000"- 00:11:44.862 [2024-11-26 18:08:22.102580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:11:44.862 [2024-11-26 18:08:22.102605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.862 #53 NEW cov: 12451 ft: 14729 corp: 39/206b lim: 10 exec/s: 53 rss: 75Mb L: 2/10 MS: 1 ChangeBinInt- 00:11:44.862 [2024-11-26 18:08:22.152789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:11:44.862 [2024-11-26 18:08:22.152811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.862 #54 NEW cov: 12451 ft: 14736 corp: 40/209b lim: 10 exec/s: 27 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:11:44.862 #54 DONE cov: 12451 ft: 14736 corp: 40/209b lim: 10 exec/s: 27 rss: 75Mb 00:11:44.862 ###### Recommended dictionary. ###### 00:11:44.862 "9\300\222zJF\205\000" # Uses: 1 00:11:44.862 ###### End of recommended dictionary. 
###### 00:11:44.862 Done 54 runs in 2 second(s) 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:44.862 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:45.120 18:08:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:11:45.120 [2024-11-26 18:08:22.333024] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:45.120 [2024-11-26 18:08:22.333083] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289344 ] 00:11:45.120 [2024-11-26 18:08:22.534831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.378 [2024-11-26 18:08:22.576221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.378 [2024-11-26 18:08:22.638613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.378 [2024-11-26 18:08:22.654807] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:11:45.378 INFO: Running with entropic power schedule (0xFF, 100). 
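[editor's note] The ../common.sh@72-73 lines in the trace above (the (( i++ )) and (( i < fuzz_num )) checks followed by start_llvm_fuzz 8 1 0x1) come from the driver loop that steps through the fuzzer targets one after another. A rough sketch of that loop, assuming fuzz_num holds the number of targets and start_llvm_fuzz is the per-target helper whose effects are traced above:

    for ((i = 0; i < fuzz_num; i++)); do
      start_llvm_fuzz "$i" "$timen" "$core"   # e.g. start_llvm_fuzz 8 1 0x1 for this run
    done

The helper removes its own config and suppressions file at the end of each run (the rm -rf at run.sh@54 above), and the next iteration repeats the same setup with the next port, 4408 here.
[end editor's note]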
00:11:45.378 INFO: Seed: 2536538312 00:11:45.378 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:45.379 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:45.379 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:11:45.379 INFO: A corpus is not provided, starting from an empty corpus 00:11:45.379 [2024-11-26 18:08:22.723221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.379 [2024-11-26 18:08:22.723268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.379 #2 INITED cov: 12251 ft: 12252 corp: 1/1b exec/s: 0 rss: 72Mb 00:11:45.379 [2024-11-26 18:08:22.784798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.379 [2024-11-26 18:08:22.784830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.379 [2024-11-26 18:08:22.784920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.379 [2024-11-26 18:08:22.784937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.379 [2024-11-26 18:08:22.785029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.379 [2024-11-26 18:08:22.785045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.379 [2024-11-26 18:08:22.785143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.379 [2024-11-26 18:08:22.785160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.379 [2024-11-26 18:08:22.785253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.379 [2024-11-26 18:08:22.785269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.637 #3 NEW cov: 12364 ft: 13789 corp: 2/6b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:11:45.637 [2024-11-26 18:08:22.874875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.637 [2024-11-26 18:08:22.874904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.874989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.875005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.875091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.875106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.875194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.875208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.875293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.875309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.638 #4 NEW cov: 12370 ft: 13940 corp: 3/11b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:11:45.638 [2024-11-26 18:08:22.965201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.965230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.965314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.965330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.965427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.965441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.965532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.965547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:22.965635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:22.965649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.638 #5 NEW cov: 12455 ft: 14176 corp: 4/16b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:11:45.638 [2024-11-26 18:08:23.055579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 
18:08:23.055608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:23.055697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:23.055713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:23.055806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:23.055820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:23.055919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:23.055933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.638 [2024-11-26 18:08:23.056027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.638 [2024-11-26 18:08:23.056043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.896 #6 NEW cov: 12455 ft: 14271 corp: 5/21b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:11:45.896 [2024-11-26 18:08:23.115703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.115732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.115822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.115838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.115923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.115937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.116024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.116039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.116128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.116144] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.896 #7 NEW cov: 12455 ft: 14382 corp: 6/26b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:11:45.896 [2024-11-26 18:08:23.206326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.206354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.206447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.206462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.206546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.206560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.206648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.206664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.896 [2024-11-26 18:08:23.206749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.896 [2024-11-26 18:08:23.206763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.896 #8 NEW cov: 12455 ft: 14514 corp: 7/31b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:11:45.896 [2024-11-26 18:08:23.296412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.897 [2024-11-26 18:08:23.296440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.897 [2024-11-26 18:08:23.296537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.897 [2024-11-26 18:08:23.296554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.897 [2024-11-26 18:08:23.296643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.897 [2024-11-26 18:08:23.296658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.897 [2024-11-26 18:08:23.296750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:11:45.897 [2024-11-26 18:08:23.296766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.897 [2024-11-26 18:08:23.296853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.897 [2024-11-26 18:08:23.296869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:45.897 #9 NEW cov: 12455 ft: 14537 corp: 8/36b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:11:46.155 [2024-11-26 18:08:23.356744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.356773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.356860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.356877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.356966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.356981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.357067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.357082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.357174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.357189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.155 #10 NEW cov: 12455 ft: 14584 corp: 9/41b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:11:46.155 [2024-11-26 18:08:23.447152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.447180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.447263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.447280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.447371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 
cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.447391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.447476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.447490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.447579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.447596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.155 #11 NEW cov: 12455 ft: 14696 corp: 10/46b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:11:46.155 [2024-11-26 18:08:23.507322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.155 [2024-11-26 18:08:23.507350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.155 [2024-11-26 18:08:23.507449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.507465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.156 [2024-11-26 18:08:23.507546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.507562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.156 [2024-11-26 18:08:23.507655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.507671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.156 [2024-11-26 18:08:23.507756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.507772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.156 #12 NEW cov: 12455 ft: 14761 corp: 11/51b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:11:46.156 [2024-11-26 18:08:23.597479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.597507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.156 [2024-11-26 18:08:23.597589] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.597606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.156 [2024-11-26 18:08:23.597691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.597706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.156 [2024-11-26 18:08:23.597793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.156 [2024-11-26 18:08:23.597811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.414 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:46.414 #13 NEW cov: 12478 ft: 14912 corp: 12/55b lim: 5 exec/s: 13 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:11:46.414 [2024-11-26 18:08:23.828625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.414 [2024-11-26 18:08:23.828663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.414 [2024-11-26 18:08:23.828756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.414 [2024-11-26 18:08:23.828772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.414 [2024-11-26 18:08:23.828869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.414 [2024-11-26 18:08:23.828885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.414 [2024-11-26 18:08:23.828978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.414 [2024-11-26 18:08:23.828994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.414 [2024-11-26 18:08:23.829086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.414 [2024-11-26 18:08:23.829102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.414 #14 NEW cov: 12478 ft: 14973 corp: 13/60b lim: 5 exec/s: 14 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:11:46.672 [2024-11-26 18:08:23.888619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.672 [2024-11-26 
18:08:23.888650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.672 [2024-11-26 18:08:23.888739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.672 [2024-11-26 18:08:23.888755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.672 [2024-11-26 18:08:23.888857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.672 [2024-11-26 18:08:23.888873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:23.888973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:23.888991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.673 #15 NEW cov: 12478 ft: 15072 corp: 14/64b lim: 5 exec/s: 15 rss: 74Mb L: 4/5 MS: 1 ShuffleBytes- 00:11:46.673 [2024-11-26 18:08:23.979011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:23.979045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:23.979138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:23.979156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:23.979251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:23.979268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:23.979359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:23.979377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.673 #16 NEW cov: 12478 ft: 15095 corp: 15/68b lim: 5 exec/s: 16 rss: 74Mb L: 4/5 MS: 1 ChangeBinInt- 00:11:46.673 [2024-11-26 18:08:24.039668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.039697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.039795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.039811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.039904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.039919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.040015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.040031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.040126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.040144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.673 #17 NEW cov: 12478 ft: 15167 corp: 16/73b lim: 5 exec/s: 17 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:11:46.673 [2024-11-26 18:08:24.100140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.100169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.100261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.100277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.100378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.100396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.100496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.100512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.673 [2024-11-26 18:08:24.100605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.673 [2024-11-26 18:08:24.100621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.931 #18 NEW cov: 12478 ft: 15205 corp: 17/78b lim: 5 exec/s: 18 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:11:46.931 [2024-11-26 18:08:24.160173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.160202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.160303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.160320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.160411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.160426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.160523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.160541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.931 #19 NEW cov: 12478 ft: 15262 corp: 18/82b lim: 5 exec/s: 19 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:11:46.931 [2024-11-26 18:08:24.220978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.221007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.221103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.221120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.221214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.221230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.221326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.221342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.221437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.221452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.931 #20 NEW cov: 12478 ft: 15284 corp: 19/87b lim: 5 exec/s: 20 rss: 74Mb L: 5/5 MS: 1 CMP- DE: "\000\000\000\361"- 
00:11:46.931 [2024-11-26 18:08:24.311430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.311460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.311559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.311576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.311670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.311685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.931 [2024-11-26 18:08:24.311772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.931 [2024-11-26 18:08:24.311788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.932 [2024-11-26 18:08:24.311877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:46.932 [2024-11-26 18:08:24.311893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.932 #21 NEW cov: 12478 ft: 15298 corp: 20/92b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:11:47.190 [2024-11-26 18:08:24.401607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.401636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.401731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.401747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.401839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.401854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.401950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.401965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.190 #22 NEW cov: 12478 ft: 15364 
corp: 21/96b lim: 5 exec/s: 22 rss: 75Mb L: 4/5 MS: 1 ChangeByte- 00:11:47.190 [2024-11-26 18:08:24.492346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.492377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.492473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.492493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.492584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.492600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.492697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.492712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.190 #23 NEW cov: 12478 ft: 15373 corp: 22/100b lim: 5 exec/s: 23 rss: 75Mb L: 4/5 MS: 1 CopyPart- 00:11:47.190 [2024-11-26 18:08:24.582968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.582997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.583093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.583111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.583204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.583218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.190 [2024-11-26 18:08:24.583308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.190 [2024-11-26 18:08:24.583324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.190 #24 NEW cov: 12478 ft: 15396 corp: 23/104b lim: 5 exec/s: 24 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:11:47.449 [2024-11-26 18:08:24.643267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.449 [2024-11-26 
18:08:24.643295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.449 [2024-11-26 18:08:24.643390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.449 [2024-11-26 18:08:24.643407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.449 [2024-11-26 18:08:24.643504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.449 [2024-11-26 18:08:24.643519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.449 [2024-11-26 18:08:24.643612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.449 [2024-11-26 18:08:24.643629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.449 #25 NEW cov: 12478 ft: 15454 corp: 24/108b lim: 5 exec/s: 12 rss: 75Mb L: 4/5 MS: 1 ChangeBinInt- 00:11:47.449 #25 DONE cov: 12478 ft: 15454 corp: 24/108b lim: 5 exec/s: 12 rss: 75Mb 00:11:47.449 ###### Recommended dictionary. ###### 00:11:47.449 "\000\000\000\361" # Uses: 0 00:11:47.449 ###### End of recommended dictionary. ###### 00:11:47.449 Done 25 runs in 2 second(s) 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 
00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:47.449 18:08:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:11:47.449 [2024-11-26 18:08:24.859697] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:47.449 [2024-11-26 18:08:24.859758] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289860 ] 00:11:47.707 [2024-11-26 18:08:25.054271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.707 [2024-11-26 18:08:25.093970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.965 [2024-11-26 18:08:25.156332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.965 [2024-11-26 18:08:25.172512] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:11:47.965 INFO: Running with entropic power schedule (0xFF, 100). 00:11:47.965 INFO: Seed: 755578520 00:11:47.965 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:47.965 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:47.965 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:11:47.965 INFO: A corpus is not provided, starting from an empty corpus 00:11:47.965 [2024-11-26 18:08:25.220165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.220191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.965 #2 INITED cov: 12229 ft: 12215 corp: 1/1b exec/s: 0 rss: 72Mb 00:11:47.965 [2024-11-26 18:08:25.260175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.260201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.965 #3 NEW cov: 12364 ft: 12929 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 CopyPart- 00:11:47.965 [2024-11-26 18:08:25.321132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.321155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.965 [2024-11-26 18:08:25.321226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 
[2024-11-26 18:08:25.321238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.965 [2024-11-26 18:08:25.321293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.321304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.965 [2024-11-26 18:08:25.321358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.321369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.965 [2024-11-26 18:08:25.321429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.321440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.965 #4 NEW cov: 12370 ft: 13898 corp: 3/7b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:11:47.965 [2024-11-26 18:08:25.380731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.380755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.965 [2024-11-26 18:08:25.380827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:47.965 [2024-11-26 18:08:25.380839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.223 #5 NEW cov: 12455 ft: 14307 corp: 4/9b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:11:48.223 [2024-11-26 18:08:25.440913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.440935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.223 [2024-11-26 18:08:25.441005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.441017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.223 #6 NEW cov: 12455 ft: 14400 corp: 5/11b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:11:48.223 [2024-11-26 18:08:25.501233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.501255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.223 [2024-11-26 18:08:25.501315] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.501326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.223 [2024-11-26 18:08:25.501380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.501391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.223 #7 NEW cov: 12455 ft: 14626 corp: 6/14b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:11:48.223 [2024-11-26 18:08:25.541732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.541755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.223 [2024-11-26 18:08:25.541830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.541841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.223 [2024-11-26 18:08:25.541895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.541906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.223 [2024-11-26 18:08:25.541959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.223 [2024-11-26 18:08:25.541969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:48.224 [2024-11-26 18:08:25.542024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.224 [2024-11-26 18:08:25.542035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:48.224 #8 NEW cov: 12455 ft: 14722 corp: 7/19b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:11:48.224 [2024-11-26 18:08:25.581279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.224 [2024-11-26 18:08:25.581304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.224 [2024-11-26 18:08:25.581361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.224 [2024-11-26 18:08:25.581377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:11:48.224 #9 NEW cov: 12455 ft: 14760 corp: 8/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:11:48.224 [2024-11-26 18:08:25.621565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.224 [2024-11-26 18:08:25.621588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.224 [2024-11-26 18:08:25.621643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.224 [2024-11-26 18:08:25.621661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.224 [2024-11-26 18:08:25.621716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.224 [2024-11-26 18:08:25.621727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.224 #10 NEW cov: 12455 ft: 14805 corp: 9/24b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:11:48.482 [2024-11-26 18:08:25.681360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.681388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.482 #11 NEW cov: 12455 ft: 14928 corp: 10/25b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:11:48.482 [2024-11-26 18:08:25.721477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.721500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.482 #12 NEW cov: 12455 ft: 15007 corp: 11/26b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:11:48.482 [2024-11-26 18:08:25.781701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.781724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.482 #13 NEW cov: 12455 ft: 15063 corp: 12/27b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:11:48.482 [2024-11-26 18:08:25.841806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.841827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.482 #14 NEW cov: 12455 ft: 15071 corp: 13/28b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:11:48.482 [2024-11-26 18:08:25.902575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.902597] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.482 [2024-11-26 18:08:25.902667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.902678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.482 [2024-11-26 18:08:25.902733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.902744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.482 [2024-11-26 18:08:25.902797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.482 [2024-11-26 18:08:25.902808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:48.741 #15 NEW cov: 12455 ft: 15088 corp: 14/32b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 EraseBytes- 00:11:48.741 [2024-11-26 18:08:25.962936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:25.962961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.741 [2024-11-26 18:08:25.963083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:25.963093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.741 [2024-11-26 18:08:25.963108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:25.963116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.741 [2024-11-26 18:08:25.963129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:25.963137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:48.741 [2024-11-26 18:08:25.963150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:25.963157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:48.741 #16 NEW cov: 12464 ft: 15128 corp: 15/37b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ShuffleBytes- 00:11:48.741 [2024-11-26 18:08:26.002441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:26.002462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.741 [2024-11-26 18:08:26.002537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:26.002548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.741 #17 NEW cov: 12464 ft: 15174 corp: 16/39b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:11:48.741 [2024-11-26 18:08:26.042363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:26.042389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.741 #18 NEW cov: 12464 ft: 15176 corp: 17/40b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:11:48.741 [2024-11-26 18:08:26.082504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:48.741 [2024-11-26 18:08:26.082528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.001 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:49.001 #19 NEW cov: 12487 ft: 15215 corp: 18/41b lim: 5 exec/s: 19 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:11:49.001 [2024-11-26 18:08:26.233748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.233777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.233853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.233867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.233925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.233936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.233994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.234005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.234061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.234072] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:49.001 #20 NEW cov: 12487 ft: 15267 corp: 19/46b lim: 5 exec/s: 20 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:11:49.001 [2024-11-26 18:08:26.293844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.293867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.293940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.293953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.294010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.294021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.294078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.294089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.294145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.294156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:49.001 #21 NEW cov: 12487 ft: 15297 corp: 20/51b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:11:49.001 [2024-11-26 18:08:26.353462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.353484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.353542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.353553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.001 #22 NEW cov: 12487 ft: 15320 corp: 21/53b lim: 5 exec/s: 22 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:11:49.001 [2024-11-26 18:08:26.393758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.393782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.393858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.001 [2024-11-26 18:08:26.393869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.001 [2024-11-26 18:08:26.393928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.002 [2024-11-26 18:08:26.393939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.002 #23 NEW cov: 12487 ft: 15335 corp: 22/56b lim: 5 exec/s: 23 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:11:49.002 [2024-11-26 18:08:26.433684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.002 [2024-11-26 18:08:26.433706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.002 [2024-11-26 18:08:26.433764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.002 [2024-11-26 18:08:26.433775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.261 #24 NEW cov: 12487 ft: 15363 corp: 23/58b lim: 5 exec/s: 24 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:11:49.261 [2024-11-26 18:08:26.473621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.473643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.261 #25 NEW cov: 12487 ft: 15370 corp: 24/59b lim: 5 exec/s: 25 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:11:49.261 [2024-11-26 18:08:26.514132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.514154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.261 [2024-11-26 18:08:26.514230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.514241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.261 [2024-11-26 18:08:26.514299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.514310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.261 #26 NEW cov: 12487 ft: 15387 corp: 25/62b lim: 5 exec/s: 26 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:11:49.261 [2024-11-26 18:08:26.554197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:11:49.261 [2024-11-26 18:08:26.554218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.261 [2024-11-26 18:08:26.554293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.554305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.261 [2024-11-26 18:08:26.554365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.554380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.261 #27 NEW cov: 12487 ft: 15403 corp: 26/65b lim: 5 exec/s: 27 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:11:49.261 [2024-11-26 18:08:26.614219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.614242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.261 [2024-11-26 18:08:26.614320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.614331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.261 #28 NEW cov: 12487 ft: 15408 corp: 27/67b lim: 5 exec/s: 28 rss: 75Mb L: 2/5 MS: 1 ChangeByte- 00:11:49.261 [2024-11-26 18:08:26.654309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.261 [2024-11-26 18:08:26.654330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.261 [2024-11-26 18:08:26.654408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.262 [2024-11-26 18:08:26.654420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.262 #29 NEW cov: 12487 ft: 15410 corp: 28/69b lim: 5 exec/s: 29 rss: 75Mb L: 2/5 MS: 1 ChangeByte- 00:11:49.521 [2024-11-26 18:08:26.714531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.714554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.714627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.714639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.521 #30 NEW cov: 12487 
ft: 15487 corp: 29/71b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 ChangeBit- 00:11:49.521 [2024-11-26 18:08:26.754786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.754808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.754866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.754877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.754935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.754945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.521 #31 NEW cov: 12487 ft: 15496 corp: 30/74b lim: 5 exec/s: 31 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:11:49.521 [2024-11-26 18:08:26.794906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.794935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.794994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.795005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.795063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.795074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.521 #32 NEW cov: 12487 ft: 15568 corp: 31/77b lim: 5 exec/s: 32 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:11:49.521 [2024-11-26 18:08:26.834828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.834851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.834924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.834935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.521 #33 NEW cov: 12487 ft: 15596 corp: 32/79b lim: 5 exec/s: 33 rss: 75Mb L: 2/5 MS: 1 ChangeByte- 00:11:49.521 [2024-11-26 18:08:26.895224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.895246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.895304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.895315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.895378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.895389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.521 #34 NEW cov: 12487 ft: 15612 corp: 33/82b lim: 5 exec/s: 34 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:11:49.521 [2024-11-26 18:08:26.935140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.935162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.521 [2024-11-26 18:08:26.935222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.521 [2024-11-26 18:08:26.935233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.521 #35 NEW cov: 12487 ft: 15619 corp: 34/84b lim: 5 exec/s: 35 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:11:49.781 [2024-11-26 18:08:26.975140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:26.975162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.782 #36 NEW cov: 12487 ft: 15636 corp: 35/85b lim: 5 exec/s: 36 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:11:49.782 [2024-11-26 18:08:27.035696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.035719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.782 [2024-11-26 18:08:27.035779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.035790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.782 [2024-11-26 18:08:27.035843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.035855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.782 #37 NEW cov: 12487 ft: 15668 corp: 36/88b lim: 5 exec/s: 37 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:11:49.782 [2024-11-26 18:08:27.095700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.095726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.782 [2024-11-26 18:08:27.095785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.095797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.782 #38 NEW cov: 12487 ft: 15676 corp: 37/90b lim: 5 exec/s: 38 rss: 75Mb L: 2/5 MS: 1 CrossOver- 00:11:49.782 [2024-11-26 18:08:27.135801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.135825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.782 [2024-11-26 18:08:27.135884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.135895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.782 #39 NEW cov: 12487 ft: 15696 corp: 38/92b lim: 5 exec/s: 39 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:11:49.782 [2024-11-26 18:08:27.176103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.176125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.782 [2024-11-26 18:08:27.176201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.176212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.782 [2024-11-26 18:08:27.176271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:49.782 [2024-11-26 18:08:27.176282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.782 #40 NEW cov: 12487 ft: 15708 corp: 39/95b lim: 5 exec/s: 20 rss: 75Mb L: 3/5 MS: 1 ShuffleBytes- 00:11:49.782 #40 DONE cov: 12487 ft: 15708 corp: 39/95b lim: 5 exec/s: 20 rss: 75Mb 00:11:49.782 Done 40 runs in 2 second(s) 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 
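Run 9 ends above: the fuzzer reports 40 runs in 2 second(s), run.sh removes the per-run JSON config and LSAN suppression file, and the ../common.sh@72-73 xtrace entries show the driver advancing to the next fuzzer type. The loop implied by those trace lines is sketched below; this is an inference from the xtrace output, not the actual common.sh source, and the variable names simply mirror the locals printed by nvmf/run.sh in the next trace block.

    # Hypothetical reconstruction of the driver loop traced as ../common.sh@72-73.
    # Only the call shape comes from the log; names and bounds are assumptions.
    for ((i = 0; i < fuzz_num; i++)); do
        start_llvm_fuzz "$i" "$timen" "$core"   # e.g. "start_llvm_fuzz 10 1 0x1" below
    done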
00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:50.041 18:08:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:11:50.041 [2024-11-26 18:08:27.382981] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:50.041 [2024-11-26 18:08:27.383056] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290375 ] 00:11:50.301 [2024-11-26 18:08:27.592961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.301 [2024-11-26 18:08:27.632506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.301 [2024-11-26 18:08:27.694860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.301 [2024-11-26 18:08:27.711039] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:11:50.301 INFO: Running with entropic power schedule (0xFF, 100). 
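The nvmf/run.sh xtrace above shows how each fuzzer instance is wired up: fuzzer type 10 gets its own corpus directory, a per-run JSON target config, a TCP listener port derived from the fuzzer number, and an LSAN suppression file, and then llvm_nvme_fuzz is pointed at the freshly started target. A condensed sketch of that setup, using only the values visible in the trace, follows; the output redirections, the $rootdir variable, and the surrounding function body are assumptions rather than the actual run.sh source.

    # Sketch of the per-run setup traced at nvmf/run.sh@23-45 (assumed pieces marked).
    fuzzer_type=10
    timen=1                                     # run time in minutes, passed to -t
    core=0x1                                    # core mask, passed to -m
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type     # $rootdir assumed
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    port=44$(printf %02d "$fuzzer_type")        # "10" -> listener port 4410 (derivation inferred)
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # rewrite the template target config so this instance listens on its own port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"    # redirect assumed
    # known shutdown-path allocations are suppressed instead of reported as leaks
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"            # redirects assumed
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

The run.sh@45 command above then consumes these values directly: -F "$trid" selects the TCP listener to fuzz, -c "$nvmf_cfg" supplies the rewritten target config, -t "$timen" bounds the run, -D "$corpus_dir" persists the corpus between runs, and -Z 10 appears to select which command handler is exercised (run 10's NEW_FUNC line below points at fuzz_admin_security_receive_command, run 11's at fuzz_admin_security_send_command).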
00:11:50.301 INFO: Seed: 3295559329 00:11:50.301 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:50.301 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:50.301 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:11:50.301 INFO: A corpus is not provided, starting from an empty corpus 00:11:50.301 #2 INITED exec/s: 0 rss: 65Mb 00:11:50.301 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:50.301 This may also happen if the target rejected all inputs we tried so far 00:11:50.560 [2024-11-26 18:08:27.750246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.560 [2024-11-26 18:08:27.750272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.560 [2024-11-26 18:08:27.750339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.560 [2024-11-26 18:08:27.750351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.560 NEW_FUNC[1/716]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:11:50.560 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:50.560 #7 NEW cov: 12273 ft: 12273 corp: 2/23b lim: 40 exec/s: 0 rss: 72Mb L: 22/22 MS: 5 CrossOver-ChangeByte-InsertByte-InsertByte-InsertRepeatedBytes- 00:11:50.560 [2024-11-26 18:08:27.900595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.560 [2024-11-26 18:08:27.900623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.560 [2024-11-26 18:08:27.900685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.560 [2024-11-26 18:08:27.900696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.561 #13 NEW cov: 12387 ft: 12696 corp: 3/45b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ChangeASCIIInt- 00:11:50.561 [2024-11-26 18:08:27.960692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.561 [2024-11-26 18:08:27.960714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.561 [2024-11-26 18:08:27.960789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.561 [2024-11-26 18:08:27.960800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.561 #14 NEW cov: 12393 ft: 13033 corp: 4/67b 
lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ChangeASCIIInt- 00:11:50.561 [2024-11-26 18:08:28.001088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.561 [2024-11-26 18:08:28.001110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.561 [2024-11-26 18:08:28.001171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.561 [2024-11-26 18:08:28.001183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.561 [2024-11-26 18:08:28.001243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.561 [2024-11-26 18:08:28.001254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.561 [2024-11-26 18:08:28.001312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.561 [2024-11-26 18:08:28.001322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:50.820 #15 NEW cov: 12478 ft: 13744 corp: 5/105b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 CopyPart- 00:11:50.820 [2024-11-26 18:08:28.060968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.060993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.820 [2024-11-26 18:08:28.061067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000f800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.061078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.820 #16 NEW cov: 12478 ft: 13999 corp: 6/127b lim: 40 exec/s: 0 rss: 73Mb L: 22/38 MS: 1 ChangeBinInt- 00:11:50.820 [2024-11-26 18:08:28.100929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.100951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.820 #17 NEW cov: 12478 ft: 14342 corp: 7/142b lim: 40 exec/s: 0 rss: 73Mb L: 15/38 MS: 1 EraseBytes- 00:11:50.820 [2024-11-26 18:08:28.161143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000370a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.161166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.820 #18 NEW cov: 12478 ft: 14538 corp: 8/152b lim: 40 exec/s: 0 rss: 73Mb L: 10/38 MS: 1 EraseBytes- 00:11:50.820 [2024-11-26 18:08:28.221760] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.221782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.820 [2024-11-26 18:08:28.221844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.221855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.820 [2024-11-26 18:08:28.221932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f6f6f6f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.221943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.820 [2024-11-26 18:08:28.222000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f6f6f6f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.222010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:50.820 #24 NEW cov: 12478 ft: 14595 corp: 9/191b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:11:50.820 [2024-11-26 18:08:28.261842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.261866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.820 [2024-11-26 18:08:28.261943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.261956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.820 [2024-11-26 18:08:28.262016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f6f6f6f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.820 [2024-11-26 18:08:28.262028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.821 [2024-11-26 18:08:28.262090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f6f6f6f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.821 [2024-11-26 18:08:28.262101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.080 #25 NEW cov: 12478 ft: 14656 corp: 10/230b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 ChangeASCIIInt- 00:11:51.080 [2024-11-26 18:08:28.321571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000c90a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.080 [2024-11-26 18:08:28.321595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:51.080 #26 NEW cov: 12478 ft: 14693 corp: 11/240b lim: 40 exec/s: 0 rss: 73Mb L: 10/39 MS: 1 ChangeBinInt- 00:11:51.080 [2024-11-26 18:08:28.382186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.080 [2024-11-26 18:08:28.382208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.080 [2024-11-26 18:08:28.382270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.080 [2024-11-26 18:08:28.382281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.080 [2024-11-26 18:08:28.382340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f6f6f6f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.080 [2024-11-26 18:08:28.382352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.080 [2024-11-26 18:08:28.382409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f6f6f6f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.382420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.081 #27 NEW cov: 12478 ft: 14773 corp: 12/279b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 ChangeBit- 00:11:51.081 [2024-11-26 18:08:28.442075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.442098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.081 [2024-11-26 18:08:28.442156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f7fff800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.442168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.081 #28 NEW cov: 12478 ft: 14821 corp: 13/301b lim: 40 exec/s: 0 rss: 73Mb L: 22/39 MS: 1 ChangeBinInt- 00:11:51.081 [2024-11-26 18:08:28.482463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000185 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.482487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.081 [2024-11-26 18:08:28.482548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:464eb057 cdw11:42b20000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.482559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.081 [2024-11-26 18:08:28.482632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.482648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.081 [2024-11-26 18:08:28.482705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.081 [2024-11-26 18:08:28.482716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.081 #29 NEW cov: 12478 ft: 14855 corp: 14/339b lim: 40 exec/s: 0 rss: 73Mb L: 38/39 MS: 1 CMP- DE: "\001\205FN\260WB\262"- 00:11:51.340 [2024-11-26 18:08:28.542196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000185 cdw11:464eb057 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.340 [2024-11-26 18:08:28.542219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.340 #35 NEW cov: 12478 ft: 14886 corp: 15/349b lim: 40 exec/s: 0 rss: 74Mb L: 10/39 MS: 1 PersAutoDict- DE: "\001\205FN\260WB\262"- 00:11:51.340 [2024-11-26 18:08:28.582934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.340 [2024-11-26 18:08:28.582956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.340 [2024-11-26 18:08:28.583016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.340 [2024-11-26 18:08:28.583027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.340 [2024-11-26 18:08:28.583086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.340 [2024-11-26 18:08:28.583097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.340 [2024-11-26 18:08:28.583157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.583167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.341 [2024-11-26 18:08:28.583225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:300a3f53 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.583236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:51.341 #36 NEW cov: 12478 ft: 14956 corp: 16/389b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:11:51.341 [2024-11-26 18:08:28.622393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000c90a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.622416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:51.341 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:51.341 #37 NEW cov: 12501 ft: 14999 corp: 17/400b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 InsertByte- 00:11:51.341 [2024-11-26 18:08:28.683050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000185 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.683075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.341 [2024-11-26 18:08:28.683150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:46000030 cdw11:42b20000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.683166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.341 [2024-11-26 18:08:28.683226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.683237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.341 [2024-11-26 18:08:28.683294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.683306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.341 #38 NEW cov: 12501 ft: 15013 corp: 18/438b lim: 40 exec/s: 0 rss: 74Mb L: 38/40 MS: 1 CrossOver- 00:11:51.341 [2024-11-26 18:08:28.743092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.743114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.341 [2024-11-26 18:08:28.743176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.743187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.341 [2024-11-26 18:08:28.743262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f7fff800 cdw11:0000370a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.341 [2024-11-26 18:08:28.743272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.341 #39 NEW cov: 12501 ft: 15215 corp: 19/464b lim: 40 exec/s: 39 rss: 74Mb L: 26/40 MS: 1 CMP- DE: "\024\000\000\000"- 00:11:51.601 [2024-11-26 18:08:28.803210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:009128df cdw11:6c4f4685 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.803233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.803295] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.803305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.803364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.803378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.601 #40 NEW cov: 12501 ft: 15231 corp: 20/494b lim: 40 exec/s: 40 rss: 74Mb L: 30/40 MS: 1 CMP- DE: "\221(\337lOF\205\000"- 00:11:51.601 [2024-11-26 18:08:28.843161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000370a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.843183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.843244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3f530000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.843255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.601 #41 NEW cov: 12501 ft: 15236 corp: 21/512b lim: 40 exec/s: 41 rss: 74Mb L: 18/40 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\004"- 00:11:51.601 [2024-11-26 18:08:28.883255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000370a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.883278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.883354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3f530000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.883365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.601 #42 NEW cov: 12501 ft: 15252 corp: 22/530b lim: 40 exec/s: 42 rss: 74Mb L: 18/40 MS: 1 ShuffleBytes- 00:11:51.601 [2024-11-26 18:08:28.943814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.943836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.943898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000004f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.943909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.943967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f6f6f6f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.943977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:28.944032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f6f6f6f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:28.944043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.601 #43 NEW cov: 12501 ft: 15271 corp: 23/569b lim: 40 exec/s: 43 rss: 74Mb L: 39/40 MS: 1 ChangeBit- 00:11:51.601 [2024-11-26 18:08:29.003637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:29.003658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.601 [2024-11-26 18:08:29.003735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.601 [2024-11-26 18:08:29.003746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.601 #44 NEW cov: 12501 ft: 15283 corp: 24/591b lim: 40 exec/s: 44 rss: 74Mb L: 22/40 MS: 1 ChangeASCIIInt- 00:11:51.872 [2024-11-26 18:08:29.063700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00001400 cdw11:0000370a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.063722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.872 #45 NEW cov: 12501 ft: 15318 corp: 25/601b lim: 40 exec/s: 45 rss: 74Mb L: 10/40 MS: 1 PersAutoDict- DE: "\024\000\000\000"- 00:11:51.872 [2024-11-26 18:08:29.104240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.104262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.104321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.104335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.104395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f6f6f6f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.104407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.104481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:f6f6f6f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.104491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:11:51.872 #46 NEW cov: 12501 ft: 15325 corp: 26/640b lim: 40 exec/s: 46 rss: 74Mb L: 39/40 MS: 1 ChangeASCIIInt- 00:11:51.872 [2024-11-26 18:08:29.144044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.144067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.144126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.144137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.872 #47 NEW cov: 12501 ft: 15333 corp: 27/661b lim: 40 exec/s: 47 rss: 74Mb L: 21/40 MS: 1 EraseBytes- 00:11:51.872 [2024-11-26 18:08:29.184150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000370a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.184173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.184234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3f530000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.184245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.872 #48 NEW cov: 12501 ft: 15343 corp: 28/679b lim: 40 exec/s: 48 rss: 74Mb L: 18/40 MS: 1 CrossOver- 00:11:51.872 [2024-11-26 18:08:29.244640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000185 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.244661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.244721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:14000000 cdw11:42b20000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.244732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.244791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.244801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:51.872 [2024-11-26 18:08:29.244859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.244869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:51.872 #49 NEW cov: 12501 ft: 15362 corp: 29/717b lim: 40 exec/s: 49 rss: 74Mb L: 38/40 MS: 1 PersAutoDict- DE: "\024\000\000\000"- 00:11:51.872 [2024-11-26 18:08:29.304376] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.872 [2024-11-26 18:08:29.304398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.132 #50 NEW cov: 12501 ft: 15374 corp: 30/732b lim: 40 exec/s: 50 rss: 75Mb L: 15/40 MS: 1 ShuffleBytes- 00:11:52.132 [2024-11-26 18:08:29.344622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.344643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.344719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00d400f8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.344731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.132 #51 NEW cov: 12501 ft: 15380 corp: 31/755b lim: 40 exec/s: 51 rss: 75Mb L: 23/40 MS: 1 InsertByte- 00:11:52.132 [2024-11-26 18:08:29.385215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.385236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.385297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00009e00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.385307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.385364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.385378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.385440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.385450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.385505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:300a3f53 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.385515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:52.132 #52 NEW cov: 12501 ft: 15395 corp: 32/795b lim: 40 exec/s: 52 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:11:52.132 [2024-11-26 18:08:29.445125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:001128df cdw11:6c4f4685 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.445147] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.445206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.445218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.132 [2024-11-26 18:08:29.445276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.445290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:52.132 #58 NEW cov: 12501 ft: 15420 corp: 33/825b lim: 40 exec/s: 58 rss: 75Mb L: 30/40 MS: 1 ChangeBit- 00:11:52.132 [2024-11-26 18:08:29.505308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:df6c4f46 cdw11:85000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.132 [2024-11-26 18:08:29.505330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.133 [2024-11-26 18:08:29.505391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.133 [2024-11-26 18:08:29.505402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.133 [2024-11-26 18:08:29.505479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00370a3f cdw11:53000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.133 [2024-11-26 18:08:29.505490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:52.133 #59 NEW cov: 12501 ft: 15434 corp: 34/854b lim: 40 exec/s: 59 rss: 75Mb L: 29/40 MS: 1 CrossOver- 00:11:52.133 [2024-11-26 18:08:29.545339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:009128df cdw11:6c4f4685 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.133 [2024-11-26 18:08:29.545361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.133 [2024-11-26 18:08:29.545426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.133 [2024-11-26 18:08:29.545437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.133 [2024-11-26 18:08:29.545496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.133 [2024-11-26 18:08:29.545507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:52.133 #60 NEW cov: 12501 ft: 15445 corp: 35/884b lim: 40 exec/s: 60 rss: 75Mb L: 30/40 MS: 1 CopyPart- 00:11:52.392 [2024-11-26 18:08:29.585136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:25000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.585157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.392 #64 NEW cov: 12501 ft: 15465 corp: 36/893b lim: 40 exec/s: 64 rss: 75Mb L: 9/40 MS: 4 ChangeByte-ChangeByte-ChangeByte-PersAutoDict- DE: "\000\000\000\000\000\000\000\004"- 00:11:52.392 [2024-11-26 18:08:29.625435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.625456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.392 [2024-11-26 18:08:29.625520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.625531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.392 #65 NEW cov: 12501 ft: 15478 corp: 37/915b lim: 40 exec/s: 65 rss: 75Mb L: 22/40 MS: 1 ChangeByte- 00:11:52.392 [2024-11-26 18:08:29.665554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00018546 cdw11:4eb05742 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.665578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.392 [2024-11-26 18:08:29.665640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b20000f6 cdw11:f6f6f6f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.665651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.392 #66 NEW cov: 12501 ft: 15491 corp: 38/936b lim: 40 exec/s: 66 rss: 75Mb L: 21/40 MS: 1 PersAutoDict- DE: "\001\205FN\260WB\262"- 00:11:52.392 [2024-11-26 18:08:29.726054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00910000 cdw11:000028df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.726077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.392 [2024-11-26 18:08:29.726139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6c4f4685 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.726150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.392 [2024-11-26 18:08:29.726211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.726223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:52.392 [2024-11-26 18:08:29.726281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000370a SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:52.392 [2024-11-26 18:08:29.726291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:52.392 #67 NEW cov: 12501 ft: 15498 corp: 39/970b lim: 40 exec/s: 33 rss: 75Mb L: 34/40 MS: 1 CopyPart- 00:11:52.392 #67 DONE cov: 12501 ft: 15498 corp: 39/970b lim: 40 exec/s: 33 rss: 75Mb 00:11:52.392 ###### Recommended dictionary. ###### 00:11:52.392 "\001\205FN\260WB\262" # Uses: 2 00:11:52.392 "\024\000\000\000" # Uses: 2 00:11:52.392 "\221(\337lOF\205\000" # Uses: 0 00:11:52.392 "\000\000\000\000\000\000\000\004" # Uses: 1 00:11:52.392 ###### End of recommended dictionary. ###### 00:11:52.393 Done 67 runs in 2 second(s) 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:52.652 18:08:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:11:52.652 [2024-11-26 18:08:29.918011] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:52.652 [2024-11-26 18:08:29.918091] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290757 ] 00:11:52.911 [2024-11-26 18:08:30.146607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.911 [2024-11-26 18:08:30.192627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.911 [2024-11-26 18:08:30.255022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.911 [2024-11-26 18:08:30.271200] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:11:52.911 INFO: Running with entropic power schedule (0xFF, 100). 00:11:52.911 INFO: Seed: 1562594172 00:11:52.911 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:52.912 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:52.912 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:11:52.912 INFO: A corpus is not provided, starting from an empty corpus 00:11:52.912 #2 INITED exec/s: 0 rss: 65Mb 00:11:52.912 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:52.912 This may also happen if the target rejected all inputs we tried so far 00:11:52.912 [2024-11-26 18:08:30.317066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:52.912 [2024-11-26 18:08:30.317092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:52.912 [2024-11-26 18:08:30.317153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:52.912 [2024-11-26 18:08:30.317164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.171 NEW_FUNC[1/717]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:11:53.171 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:53.171 #13 NEW cov: 12286 ft: 12285 corp: 2/24b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:11:53.171 [2024-11-26 18:08:30.467316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.467344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.171 [2024-11-26 18:08:30.467405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.467417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.171 #14 NEW cov: 12399 ft: 12867 corp: 3/47b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBit- 00:11:53.171 [2024-11-26 18:08:30.527639] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.527666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.171 [2024-11-26 18:08:30.527724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.527735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.171 [2024-11-26 18:08:30.527792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.527803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.171 #15 NEW cov: 12405 ft: 13288 corp: 4/75b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 CrossOver- 00:11:53.171 [2024-11-26 18:08:30.587780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c00ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.587805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.171 [2024-11-26 18:08:30.587863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.587875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.171 [2024-11-26 18:08:30.587932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.171 [2024-11-26 18:08:30.587943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.430 #16 NEW cov: 12490 ft: 13525 corp: 5/103b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 ChangeBinInt- 00:11:53.430 [2024-11-26 18:08:30.648101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.430 [2024-11-26 18:08:30.648123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 18:08:30.648181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.648193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 18:08:30.648249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffa1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.648260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 
18:08:30.648317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:a1a1a1a1 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.648327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:53.431 #17 NEW cov: 12490 ft: 13917 corp: 6/136b lim: 40 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:11:53.431 [2024-11-26 18:08:30.687845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.687867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 18:08:30.687925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.687938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.431 #18 NEW cov: 12490 ft: 14022 corp: 7/156b lim: 40 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 EraseBytes- 00:11:53.431 [2024-11-26 18:08:30.747776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.747797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.431 #19 NEW cov: 12490 ft: 14759 corp: 8/164b lim: 40 exec/s: 0 rss: 73Mb L: 8/33 MS: 1 CrossOver- 00:11:53.431 [2024-11-26 18:08:30.788539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.788561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 18:08:30.788619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.788630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 18:08:30.788685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.788695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.431 [2024-11-26 18:08:30.788750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.788760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:53.431 #20 NEW cov: 12490 ft: 14817 corp: 9/203b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CrossOver- 00:11:53.431 [2024-11-26 18:08:30.828090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff 
cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.431 [2024-11-26 18:08:30.828111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.431 #21 NEW cov: 12490 ft: 14889 corp: 10/216b lim: 40 exec/s: 0 rss: 73Mb L: 13/39 MS: 1 EraseBytes- 00:11:53.690 [2024-11-26 18:08:30.888680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.690 [2024-11-26 18:08:30.888702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.690 [2024-11-26 18:08:30.888757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.690 [2024-11-26 18:08:30.888769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.690 [2024-11-26 18:08:30.888823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.690 [2024-11-26 18:08:30.888834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.690 #22 NEW cov: 12490 ft: 14939 corp: 11/244b lim: 40 exec/s: 0 rss: 73Mb L: 28/39 MS: 1 ShuffleBytes- 00:11:53.690 [2024-11-26 18:08:30.928740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffff2aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.690 [2024-11-26 18:08:30.928762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.690 [2024-11-26 18:08:30.928836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.690 [2024-11-26 18:08:30.928848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.690 [2024-11-26 18:08:30.928904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.690 [2024-11-26 18:08:30.928915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.690 #23 NEW cov: 12490 ft: 15000 corp: 12/272b lim: 40 exec/s: 0 rss: 73Mb L: 28/39 MS: 1 ChangeByte- 00:11:53.691 [2024-11-26 18:08:30.988911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:30.988932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:30.988989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:30.989000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 
18:08:30.989055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:30.989065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.691 #24 NEW cov: 12490 ft: 15030 corp: 13/300b lim: 40 exec/s: 0 rss: 73Mb L: 28/39 MS: 1 ChangeByte- 00:11:53.691 [2024-11-26 18:08:31.029362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ef686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.029389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.029445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.029457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.029511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.029538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.029594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffa1a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.029604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.029660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:a1a1a1ff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.029672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:53.691 #25 NEW cov: 12490 ft: 15115 corp: 14/340b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:11:53.691 [2024-11-26 18:08:31.069357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.069390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.069450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.069460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.069517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.069528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:11:53.691 [2024-11-26 18:08:31.069583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:f7ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.069593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:53.691 #26 NEW cov: 12490 ft: 15197 corp: 15/379b lim: 40 exec/s: 0 rss: 73Mb L: 39/40 MS: 1 ChangeBit- 00:11:53.691 [2024-11-26 18:08:31.129323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.129344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.129409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.129420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.691 [2024-11-26 18:08:31.129473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff7fffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.691 [2024-11-26 18:08:31.129483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.950 #27 NEW cov: 12490 ft: 15248 corp: 16/407b lim: 40 exec/s: 0 rss: 73Mb L: 28/40 MS: 1 ChangeBit- 00:11:53.950 [2024-11-26 18:08:31.169629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.950 [2024-11-26 18:08:31.169653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.950 [2024-11-26 18:08:31.169727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.950 [2024-11-26 18:08:31.169738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.950 [2024-11-26 18:08:31.169794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.950 [2024-11-26 18:08:31.169804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.950 [2024-11-26 18:08:31.169861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:f7ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.169872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:53.951 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:53.951 #28 NEW cov: 12513 ft: 15284 corp: 17/446b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 ChangeByte- 00:11:53.951 [2024-11-26 18:08:31.229463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:4 nsid:0 cdw10:1c00ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.229490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.951 [2024-11-26 18:08:31.229546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.229557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.951 #29 NEW cov: 12513 ft: 15297 corp: 18/467b lim: 40 exec/s: 0 rss: 74Mb L: 21/40 MS: 1 EraseBytes- 00:11:53.951 [2024-11-26 18:08:31.289805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:40ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.289827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.951 [2024-11-26 18:08:31.289885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.289896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.951 [2024-11-26 18:08:31.289952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.289979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.951 #30 NEW cov: 12513 ft: 15345 corp: 19/491b lim: 40 exec/s: 30 rss: 74Mb L: 24/40 MS: 1 InsertByte- 00:11:53.951 [2024-11-26 18:08:31.329717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ef000000 cdw11:17ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.329739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.951 [2024-11-26 18:08:31.329795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.329806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.951 #31 NEW cov: 12513 ft: 15369 corp: 20/514b lim: 40 exec/s: 31 rss: 74Mb L: 23/40 MS: 1 ChangeBinInt- 00:11:53.951 [2024-11-26 18:08:31.369658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.951 [2024-11-26 18:08:31.369681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 #32 NEW cov: 12513 ft: 15378 corp: 21/527b lim: 40 exec/s: 32 rss: 74Mb L: 13/40 MS: 1 CopyPart- 00:11:54.210 [2024-11-26 18:08:31.430558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.430581] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.430639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.430650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.430708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.430718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.430777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.430788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.430844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.430854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:54.210 #33 NEW cov: 12513 ft: 15385 corp: 22/567b lim: 40 exec/s: 33 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:11:54.210 [2024-11-26 18:08:31.470145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:59000000 cdw11:17ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.470168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.470226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.470237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.210 #34 NEW cov: 12513 ft: 15395 corp: 23/590b lim: 40 exec/s: 34 rss: 74Mb L: 23/40 MS: 1 ChangeByte- 00:11:54.210 [2024-11-26 18:08:31.530284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.530306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.530362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.530379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.210 #35 NEW cov: 12513 ft: 15445 corp: 24/613b lim: 40 exec/s: 35 rss: 74Mb L: 23/40 MS: 1 ShuffleBytes- 00:11:54.210 [2024-11-26 18:08:31.570224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:4 nsid:0 cdw10:ef00ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.570247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 #41 NEW cov: 12513 ft: 15465 corp: 25/626b lim: 40 exec/s: 41 rss: 74Mb L: 13/40 MS: 1 EraseBytes- 00:11:54.210 [2024-11-26 18:08:31.610490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.610512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.610571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.610582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.210 #42 NEW cov: 12513 ft: 15472 corp: 26/649b lim: 40 exec/s: 42 rss: 74Mb L: 23/40 MS: 1 CopyPart- 00:11:54.210 [2024-11-26 18:08:31.650773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c00ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.650795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.650850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.650864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.210 [2024-11-26 18:08:31.650919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:24ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.210 [2024-11-26 18:08:31.650930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.469 #43 NEW cov: 12513 ft: 15482 corp: 27/677b lim: 40 exec/s: 43 rss: 74Mb L: 28/40 MS: 1 ChangeByte- 00:11:54.469 [2024-11-26 18:08:31.691301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c00ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.469 [2024-11-26 18:08:31.691323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.469 [2024-11-26 18:08:31.691387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.469 [2024-11-26 18:08:31.691399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.469 [2024-11-26 18:08:31.691454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.469 [2024-11-26 18:08:31.691465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:11:54.469 [2024-11-26 18:08:31.691517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.469 [2024-11-26 18:08:31.691527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.469 [2024-11-26 18:08:31.691584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:24ffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.469 [2024-11-26 18:08:31.691594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:54.469 #44 NEW cov: 12513 ft: 15487 corp: 28/717b lim: 40 exec/s: 44 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:11:54.469 [2024-11-26 18:08:31.750905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffff0600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.469 [2024-11-26 18:08:31.750927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.469 [2024-11-26 18:08:31.750985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.750996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.470 #45 NEW cov: 12513 ft: 15498 corp: 29/737b lim: 40 exec/s: 45 rss: 74Mb L: 20/40 MS: 1 ChangeBinInt- 00:11:54.470 [2024-11-26 18:08:31.791037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1cffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.791059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.470 [2024-11-26 18:08:31.791117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.791128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.470 #46 NEW cov: 12513 ft: 15510 corp: 30/754b lim: 40 exec/s: 46 rss: 74Mb L: 17/40 MS: 1 EraseBytes- 00:11:54.470 [2024-11-26 18:08:31.831539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c00ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.831561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.470 [2024-11-26 18:08:31.831618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.831629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.470 [2024-11-26 18:08:31.831685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.831712] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.470 [2024-11-26 18:08:31.831768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.831778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.470 #47 NEW cov: 12513 ft: 15526 corp: 31/788b lim: 40 exec/s: 47 rss: 74Mb L: 34/40 MS: 1 CopyPart- 00:11:54.470 [2024-11-26 18:08:31.891139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8b8b8b8b cdw11:8b8befff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.470 [2024-11-26 18:08:31.891161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 #48 NEW cov: 12513 ft: 15529 corp: 32/802b lim: 40 exec/s: 48 rss: 74Mb L: 14/40 MS: 1 InsertRepeatedBytes- 00:11:54.729 [2024-11-26 18:08:31.951486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffff11 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:31.951507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:31.951564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:31.951574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.729 #49 NEW cov: 12513 ft: 15541 corp: 33/825b lim: 40 exec/s: 49 rss: 74Mb L: 23/40 MS: 1 CMP- DE: "\021\000\000\000\000\000\000\000"- 00:11:54.729 [2024-11-26 18:08:31.992005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:31.992027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:31.992084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:31.992095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:31.992149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:31.992159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:31.992219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:31.992229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.729 #50 NEW cov: 12513 ft: 15593 corp: 34/859b lim: 40 exec/s: 50 rss: 75Mb L: 
34/40 MS: 1 InsertRepeatedBytes- 00:11:54.729 [2024-11-26 18:08:32.031521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ebffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.031542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 #51 NEW cov: 12513 ft: 15596 corp: 35/872b lim: 40 exec/s: 51 rss: 75Mb L: 13/40 MS: 1 ChangeBit- 00:11:54.729 [2024-11-26 18:08:32.072209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.072230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:32.072287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff1100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.072298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:32.072354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.072364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:32.072424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.072435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.729 #52 NEW cov: 12513 ft: 15600 corp: 36/908b lim: 40 exec/s: 52 rss: 75Mb L: 36/40 MS: 1 PersAutoDict- DE: "\021\000\000\000\000\000\000\000"- 00:11:54.729 [2024-11-26 18:08:32.112179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:efffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.112202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:32.112277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.112289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:32.112346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff7fffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.112358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.729 #53 NEW cov: 12513 ft: 15645 corp: 37/936b lim: 40 exec/s: 53 rss: 75Mb L: 28/40 MS: 1 ShuffleBytes- 00:11:54.729 [2024-11-26 18:08:32.172061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:8b8b8b8b cdw11:8b000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.172083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.729 [2024-11-26 18:08:32.172140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:008befff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.729 [2024-11-26 18:08:32.172151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.989 #54 NEW cov: 12513 ft: 15652 corp: 38/954b lim: 40 exec/s: 54 rss: 76Mb L: 18/40 MS: 1 InsertRepeatedBytes- 00:11:54.989 [2024-11-26 18:08:32.232115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ebffffff cdw11:11000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.989 [2024-11-26 18:08:32.232141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.989 #55 NEW cov: 12513 ft: 15678 corp: 39/967b lim: 40 exec/s: 55 rss: 76Mb L: 13/40 MS: 1 PersAutoDict- DE: "\021\000\000\000\000\000\000\000"- 00:11:54.989 [2024-11-26 18:08:32.292667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.989 [2024-11-26 18:08:32.292690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.989 [2024-11-26 18:08:32.292746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0007ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.989 [2024-11-26 18:08:32.292757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.989 [2024-11-26 18:08:32.292812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.989 [2024-11-26 18:08:32.292822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.989 #56 NEW cov: 12513 ft: 15683 corp: 40/995b lim: 40 exec/s: 28 rss: 76Mb L: 28/40 MS: 1 ChangeBinInt- 00:11:54.989 #56 DONE cov: 12513 ft: 15683 corp: 40/995b lim: 40 exec/s: 28 rss: 76Mb 00:11:54.989 ###### Recommended dictionary. ###### 00:11:54.989 "\021\000\000\000\000\000\000\000" # Uses: 2 00:11:54.989 ###### End of recommended dictionary. 
###### 00:11:54.989 Done 56 runs in 2 second(s) 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:55.249 18:08:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:11:55.249 [2024-11-26 18:08:32.481159] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:11:55.249 [2024-11-26 18:08:32.481234] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291173 ] 00:11:55.508 [2024-11-26 18:08:32.701166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.508 [2024-11-26 18:08:32.741893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.508 [2024-11-26 18:08:32.804299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.508 [2024-11-26 18:08:32.820496] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:11:55.508 INFO: Running with entropic power schedule (0xFF, 100). 00:11:55.508 INFO: Seed: 4111603781 00:11:55.508 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:55.508 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:55.508 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:11:55.508 INFO: A corpus is not provided, starting from an empty corpus 00:11:55.508 #2 INITED exec/s: 0 rss: 65Mb 00:11:55.508 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:55.508 This may also happen if the target rejected all inputs we tried so far 00:11:55.508 [2024-11-26 18:08:32.866280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.508 [2024-11-26 18:08:32.866307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.508 [2024-11-26 18:08:32.866362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.508 [2024-11-26 18:08:32.866379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.766 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:11:55.766 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:55.766 #4 NEW cov: 12271 ft: 12258 corp: 2/24b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 2 InsertByte-InsertRepeatedBytes- 00:11:55.766 [2024-11-26 18:08:33.016636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.766 [2024-11-26 18:08:33.016665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.766 [2024-11-26 18:08:33.016723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.766 [2024-11-26 18:08:33.016734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.766 NEW_FUNC[1/2]: 0x105e768 in posix_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1471 
00:11:55.766 NEW_FUNC[2/2]: 0x1c7b2b8 in spdk_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:559 00:11:55.766 #5 NEW cov: 12397 ft: 12739 corp: 3/47b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CopyPart- 00:11:55.766 [2024-11-26 18:08:33.076733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.766 [2024-11-26 18:08:33.076756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.766 [2024-11-26 18:08:33.076811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.766 [2024-11-26 18:08:33.076826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.766 #6 NEW cov: 12403 ft: 13010 corp: 4/68b lim: 40 exec/s: 0 rss: 73Mb L: 21/23 MS: 1 EraseBytes- 00:11:55.766 [2024-11-26 18:08:33.117010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.766 [2024-11-26 18:08:33.117032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.766 [2024-11-26 18:08:33.117105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.767 [2024-11-26 18:08:33.117117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.767 [2024-11-26 18:08:33.117170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.767 [2024-11-26 18:08:33.117181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.767 #7 NEW cov: 12488 ft: 13598 corp: 5/94b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 CopyPart- 00:11:55.767 [2024-11-26 18:08:33.176995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b3b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.767 [2024-11-26 18:08:33.177018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.767 [2024-11-26 18:08:33.177073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.767 [2024-11-26 18:08:33.177085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.767 #8 NEW cov: 12488 ft: 13775 corp: 6/117b lim: 40 exec/s: 0 rss: 73Mb L: 23/26 MS: 1 ChangeBit- 00:11:56.025 [2024-11-26 18:08:33.216934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000f00a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.216957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.025 #12 NEW 
cov: 12488 ft: 14499 corp: 7/125b lim: 40 exec/s: 0 rss: 73Mb L: 8/26 MS: 4 CrossOver-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:11:56.025 [2024-11-26 18:08:33.257249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.257272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.257326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.257336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.025 #13 NEW cov: 12488 ft: 14614 corp: 8/148b lim: 40 exec/s: 0 rss: 73Mb L: 23/26 MS: 1 ChangeByte- 00:11:56.025 [2024-11-26 18:08:33.297378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.297402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.297457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.297468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.025 #14 NEW cov: 12488 ft: 14656 corp: 9/164b lim: 40 exec/s: 0 rss: 73Mb L: 16/26 MS: 1 EraseBytes- 00:11:56.025 [2024-11-26 18:08:33.357719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b160b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.357741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.357798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.357809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.357862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.357873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.025 #15 NEW cov: 12488 ft: 14702 corp: 10/188b lim: 40 exec/s: 0 rss: 73Mb L: 24/26 MS: 1 InsertByte- 00:11:56.025 [2024-11-26 18:08:33.397540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.397561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.025 #16 NEW cov: 12488 ft: 14738 corp: 11/201b lim: 40 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 EraseBytes- 
00:11:56.025 [2024-11-26 18:08:33.438154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b3b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.438175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.438230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.438241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.438297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.438307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.025 [2024-11-26 18:08:33.438361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:000000b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.025 [2024-11-26 18:08:33.438371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:56.025 #17 NEW cov: 12488 ft: 15103 corp: 12/236b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:56.285 [2024-11-26 18:08:33.477858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.477880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.285 [2024-11-26 18:08:33.477935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.477947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.285 #18 NEW cov: 12488 ft: 15146 corp: 13/257b lim: 40 exec/s: 0 rss: 74Mb L: 21/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:11:56.285 [2024-11-26 18:08:33.538214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.538235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.285 [2024-11-26 18:08:33.538290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.538301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.285 [2024-11-26 18:08:33.538353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.538365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.285 #19 NEW cov: 12488 ft: 15175 corp: 14/283b lim: 40 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:11:56.285 [2024-11-26 18:08:33.598087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.598108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.285 #20 NEW cov: 12488 ft: 15202 corp: 15/291b lim: 40 exec/s: 0 rss: 74Mb L: 8/35 MS: 1 CopyPart- 00:11:56.285 [2024-11-26 18:08:33.658232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.658255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.285 #22 NEW cov: 12488 ft: 15228 corp: 16/300b lim: 40 exec/s: 0 rss: 74Mb L: 9/35 MS: 2 ShuffleBytes-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:11:56.285 [2024-11-26 18:08:33.698662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.698684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.285 [2024-11-26 18:08:33.698757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.698769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.285 [2024-11-26 18:08:33.698826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.285 [2024-11-26 18:08:33.698837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.285 #23 NEW cov: 12488 ft: 15263 corp: 17/326b lim: 40 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 ShuffleBytes- 00:11:56.545 [2024-11-26 18:08:33.738974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.738996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.739051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.739062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.739115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.739128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.739183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:210ab1b1 cdw11:b1210ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.739194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:56.545 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:56.545 #24 NEW cov: 12511 ft: 15300 corp: 18/364b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 CopyPart- 00:11:56.545 [2024-11-26 18:08:33.798972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b160b1b1 cdw11:b1b11ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.798994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.799047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.799058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.799113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.799124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.545 #30 NEW cov: 12511 ft: 15310 corp: 19/388b lim: 40 exec/s: 0 rss: 74Mb L: 24/38 MS: 1 ChangeByte- 00:11:56.545 [2024-11-26 18:08:33.859121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b160b1b1 cdw11:b1b11ab1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.859142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.859212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.859224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.859277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.859287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.545 #31 NEW cov: 12511 ft: 15332 corp: 20/412b lim: 40 exec/s: 31 rss: 74Mb L: 24/38 MS: 1 CopyPart- 00:11:56.545 [2024-11-26 18:08:33.919127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.919149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.919205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:10000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.919217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.545 #32 NEW cov: 12511 ft: 15339 corp: 21/428b lim: 40 exec/s: 32 rss: 74Mb L: 16/38 MS: 1 ChangeBinInt- 00:11:56.545 [2024-11-26 18:08:33.969848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.969871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.969930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.545 [2024-11-26 18:08:33.969941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.545 [2024-11-26 18:08:33.969995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b12ab1b1 cdw11:b1210a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.546 [2024-11-26 18:08:33.970006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.546 [2024-11-26 18:08:33.970061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.546 [2024-11-26 18:08:33.970071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:56.546 [2024-11-26 18:08:33.970124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.546 [2024-11-26 18:08:33.970135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:56.805 #33 NEW cov: 12511 ft: 15440 corp: 22/468b lim: 40 exec/s: 33 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:11:56.805 [2024-11-26 18:08:34.029467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.029490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.805 [2024-11-26 18:08:34.029550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b121 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.029561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.805 #34 NEW cov: 12511 ft: 15463 corp: 23/489b lim: 40 exec/s: 34 rss: 74Mb L: 21/40 MS: 1 CrossOver- 00:11:56.805 [2024-11-26 18:08:34.069523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.069546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.805 [2024-11-26 18:08:34.069616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.069627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.805 #35 NEW cov: 12511 ft: 15475 corp: 24/512b lim: 40 exec/s: 35 rss: 74Mb L: 23/40 MS: 1 ShuffleBytes- 00:11:56.805 [2024-11-26 18:08:34.109669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b3b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.109690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.805 [2024-11-26 18:08:34.109761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:5bb1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.109772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.805 #36 NEW cov: 12511 ft: 15483 corp: 25/535b lim: 40 exec/s: 36 rss: 74Mb L: 23/40 MS: 1 ChangeByte- 00:11:56.805 [2024-11-26 18:08:34.149613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.149639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.805 #37 NEW cov: 12511 ft: 15496 corp: 26/547b lim: 40 exec/s: 37 rss: 74Mb L: 12/40 MS: 1 EraseBytes- 00:11:56.805 [2024-11-26 18:08:34.209967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b3b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.209990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.805 [2024-11-26 18:08:34.210061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1e8 cdw11:5bb1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.805 [2024-11-26 18:08:34.210072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.805 #38 NEW cov: 12511 ft: 15506 corp: 27/570b lim: 40 exec/s: 38 rss: 74Mb L: 23/40 MS: 1 ChangeByte- 00:11:57.066 [2024-11-26 18:08:34.270348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b160b1b1 cdw11:b1b17878 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.270371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.270446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7878b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.270457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.270515] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.270525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.066 #39 NEW cov: 12511 ft: 15538 corp: 28/598b lim: 40 exec/s: 39 rss: 74Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:11:57.066 [2024-11-26 18:08:34.310077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.310101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.066 #40 NEW cov: 12511 ft: 15553 corp: 29/607b lim: 40 exec/s: 40 rss: 74Mb L: 9/40 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:11:57.066 [2024-11-26 18:08:34.350389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.350413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.350484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.350496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.066 #41 NEW cov: 12511 ft: 15645 corp: 30/630b lim: 40 exec/s: 41 rss: 74Mb L: 23/40 MS: 1 ShuffleBytes- 00:11:57.066 [2024-11-26 18:08:34.410978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.411000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.411073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:10000000 cdw11:0000b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.411087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.411142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b11000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.411153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.411208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.411219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.066 #42 NEW cov: 12511 ft: 15656 corp: 31/662b lim: 40 exec/s: 42 rss: 74Mb L: 32/40 MS: 1 CopyPart- 00:11:57.066 [2024-11-26 18:08:34.470751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 
nsid:0 cdw10:0a000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.470773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.066 [2024-11-26 18:08:34.470829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.066 [2024-11-26 18:08:34.470840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.066 #43 NEW cov: 12511 ft: 15668 corp: 32/678b lim: 40 exec/s: 43 rss: 75Mb L: 16/40 MS: 1 CopyPart- 00:11:57.325 [2024-11-26 18:08:34.521504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.521528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.521586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.521597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.521652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b12ab1b1 cdw11:b1210a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.521663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.521719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0000002e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.521730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.521784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.521794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:57.326 #44 NEW cov: 12511 ft: 15693 corp: 33/718b lim: 40 exec/s: 44 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:11:57.326 [2024-11-26 18:08:34.581287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1210a26 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.581309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.581364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.581385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.581440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND 
(19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.581451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.621340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b10a2621 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.621362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.621425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.621438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.621493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.621503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.326 #46 NEW cov: 12511 ft: 15696 corp: 34/745b lim: 40 exec/s: 46 rss: 75Mb L: 27/40 MS: 2 InsertByte-ShuffleBytes- 00:11:57.326 [2024-11-26 18:08:34.661490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3060b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.661511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.661567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.661578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.661634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.661660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.326 #47 NEW cov: 12511 ft: 15702 corp: 35/769b lim: 40 exec/s: 47 rss: 75Mb L: 24/40 MS: 1 ChangeByte- 00:11:57.326 [2024-11-26 18:08:34.701795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b160b1b1 cdw11:b1b17878 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.701817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.701876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7860b1b1 cdw11:b178b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.701887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.701943] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.701954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.702007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1210a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.702018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.326 #48 NEW cov: 12511 ft: 15717 corp: 36/801b lim: 40 exec/s: 48 rss: 75Mb L: 32/40 MS: 1 CopyPart- 00:11:57.326 [2024-11-26 18:08:34.761782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.761803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.761859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.761870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.326 [2024-11-26 18:08:34.761925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:b1000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.326 [2024-11-26 18:08:34.761936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.586 #49 NEW cov: 12511 ft: 15734 corp: 37/832b lim: 40 exec/s: 49 rss: 75Mb L: 31/40 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:11:57.586 [2024-11-26 18:08:34.801688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.801709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.586 [2024-11-26 18:08:34.801764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.801774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.586 #50 NEW cov: 12511 ft: 15742 corp: 38/853b lim: 40 exec/s: 50 rss: 75Mb L: 21/40 MS: 1 ChangeByte- 00:11:57.586 [2024-11-26 18:08:34.852365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:b1b1b1b1 cdw11:b1b1b1b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.852393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.586 [2024-11-26 18:08:34.852455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b1b1b1b1 cdw11:b1b10000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.852467] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.586 [2024-11-26 18:08:34.852520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.852530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.586 [2024-11-26 18:08:34.852586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.852595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.586 [2024-11-26 18:08:34.852648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.586 [2024-11-26 18:08:34.852659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:57.586 #51 NEW cov: 12511 ft: 15750 corp: 39/893b lim: 40 exec/s: 25 rss: 75Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:11:57.586 #51 DONE cov: 12511 ft: 15750 corp: 39/893b lim: 40 exec/s: 25 rss: 75Mb 00:11:57.586 ###### Recommended dictionary. ###### 00:11:57.586 "\000\000\000\000\000\000\000\000" # Uses: 5 00:11:57.586 ###### End of recommended dictionary. ###### 00:11:57.586 Done 51 runs in 2 second(s) 00:11:57.586 18:08:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:11:57.586 18:08:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:57.586 18:08:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:57.586 18:08:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:57.586 18:08:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:11:57.846 [2024-11-26 18:08:35.041408] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:11:57.846 [2024-11-26 18:08:35.041466] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291687 ] 00:11:57.846 [2024-11-26 18:08:35.258463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.106 [2024-11-26 18:08:35.298652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.106 [2024-11-26 18:08:35.361055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.106 [2024-11-26 18:08:35.377244] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:11:58.106 INFO: Running with entropic power schedule (0xFF, 100). 00:11:58.106 INFO: Seed: 2373646991 00:11:58.106 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:11:58.106 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:11:58.106 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:11:58.106 INFO: A corpus is not provided, starting from an empty corpus 00:11:58.106 #2 INITED exec/s: 0 rss: 65Mb 00:11:58.106 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:58.106 This may also happen if the target rejected all inputs we tried so far 00:11:58.106 [2024-11-26 18:08:35.422942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.106 [2024-11-26 18:08:35.422973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.366 NEW_FUNC[1/716]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:11:58.366 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:58.366 #7 NEW cov: 12272 ft: 12255 corp: 2/16b lim: 40 exec/s: 0 rss: 73Mb L: 15/15 MS: 5 CopyPart-ShuffleBytes-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:11:58.366 [2024-11-26 18:08:35.573269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.366 [2024-11-26 18:08:35.573300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.366 #8 NEW cov: 12385 ft: 12968 corp: 3/25b lim: 40 exec/s: 0 rss: 73Mb L: 9/15 MS: 1 InsertRepeatedBytes- 00:11:58.366 [2024-11-26 18:08:35.613297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff090000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.366 [2024-11-26 18:08:35.613321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.366 #9 NEW cov: 12391 ft: 13232 corp: 4/34b lim: 40 exec/s: 0 rss: 73Mb L: 9/15 MS: 1 ChangeBinInt- 00:11:58.366 [2024-11-26 18:08:35.673651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b895b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.366 [2024-11-26 18:08:35.673676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.366 [2024-11-26 18:08:35.673736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.366 [2024-11-26 18:08:35.673747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.366 #10 NEW cov: 12476 ft: 13750 corp: 5/50b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 InsertByte- 00:11:58.366 [2024-11-26 18:08:35.733646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.366 [2024-11-26 18:08:35.733668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.366 #16 NEW cov: 12476 ft: 13860 corp: 6/65b lim: 40 exec/s: 0 rss: 73Mb L: 15/16 MS: 1 ChangeBit- 00:11:58.366 [2024-11-26 18:08:35.773752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.366 [2024-11-26 18:08:35.773774] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.366 #17 NEW cov: 12476 ft: 13898 corp: 7/80b lim: 40 exec/s: 0 rss: 73Mb L: 15/16 MS: 1 ShuffleBytes- 00:11:58.625 [2024-11-26 18:08:35.813897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.813920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.626 #23 NEW cov: 12476 ft: 13980 corp: 8/89b lim: 40 exec/s: 0 rss: 73Mb L: 9/16 MS: 1 CMP- DE: "\000\000\000\000\000\000\003\363"- 00:11:58.626 [2024-11-26 18:08:35.854592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.854615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.854673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.854684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.854739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.854766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.854824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.854835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.854891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.854903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:58.626 #24 NEW cov: 12476 ft: 14589 corp: 9/129b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:11:58.626 [2024-11-26 18:08:35.914727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.914749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.914806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.914817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.914872] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.914883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.914937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.914948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.915005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.915015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:58.626 #25 NEW cov: 12476 ft: 14720 corp: 10/169b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 00:11:58.626 [2024-11-26 18:08:35.974913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.974934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.975010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:008d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.975022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.975080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.975094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.975150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.975161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.626 [2024-11-26 18:08:35.975219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:35.975230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:58.626 #26 NEW cov: 12476 ft: 14759 corp: 11/209b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:11:58.626 [2024-11-26 18:08:36.034495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5a5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.626 [2024-11-26 18:08:36.034516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:58.626 #27 NEW cov: 12476 ft: 14776 corp: 12/224b lim: 40 exec/s: 0 rss: 73Mb L: 15/40 MS: 1 ChangeBit- 00:11:58.886 [2024-11-26 18:08:36.074599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff090010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.074621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.886 #28 NEW cov: 12476 ft: 14783 corp: 13/233b lim: 40 exec/s: 0 rss: 73Mb L: 9/40 MS: 1 ChangeBit- 00:11:58.886 [2024-11-26 18:08:36.135053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.135074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.135132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b895b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.135143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.135215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.135226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.886 #29 NEW cov: 12476 ft: 14973 corp: 14/257b lim: 40 exec/s: 0 rss: 74Mb L: 24/40 MS: 1 CopyPart- 00:11:58.886 [2024-11-26 18:08:36.195549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.195570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.195643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.195655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.195712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.195723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.195784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.195795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.195850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b7d SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.195861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:58.886 #30 NEW cov: 12476 ft: 15017 corp: 15/297b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:11:58.886 [2024-11-26 18:08:36.235657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.235679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.235736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.235747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.235803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.235814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.235870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.235880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.235935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.235945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:58.886 #31 NEW cov: 12476 ft: 15031 corp: 16/337b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:11:58.886 [2024-11-26 18:08:36.275780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.275803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.275860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.275871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.886 [2024-11-26 18:08:36.275928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.886 [2024-11-26 18:08:36.275939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.887 [2024-11-26 18:08:36.276008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE 
(1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.887 [2024-11-26 18:08:36.276018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.887 [2024-11-26 18:08:36.276077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b7d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:58.887 [2024-11-26 18:08:36.276087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:58.887 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:58.887 #32 NEW cov: 12499 ft: 15066 corp: 17/377b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:11:59.146 [2024-11-26 18:08:36.335345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b895b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.335367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.146 #33 NEW cov: 12499 ft: 15076 corp: 18/386b lim: 40 exec/s: 0 rss: 74Mb L: 9/40 MS: 1 EraseBytes- 00:11:59.146 [2024-11-26 18:08:36.375469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff2bffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.375493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.146 #34 NEW cov: 12499 ft: 15100 corp: 19/395b lim: 40 exec/s: 0 rss: 74Mb L: 9/40 MS: 1 ChangeByte- 00:11:59.146 [2024-11-26 18:08:36.415582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.415604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.146 #35 NEW cov: 12499 ft: 15124 corp: 20/407b lim: 40 exec/s: 35 rss: 74Mb L: 12/40 MS: 1 CopyPart- 00:11:59.146 [2024-11-26 18:08:36.476355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.476383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.476460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:008d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.476471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.476531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.476542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.476599] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.476610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.476667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5bf25b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.476678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:59.146 #36 NEW cov: 12499 ft: 15130 corp: 21/447b lim: 40 exec/s: 36 rss: 74Mb L: 40/40 MS: 1 ChangeByte- 00:11:59.146 [2024-11-26 18:08:36.536520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.536542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.536603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.536614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.536670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.536681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.536736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.536746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:59.146 [2024-11-26 18:08:36.536801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.146 [2024-11-26 18:08:36.536811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:59.146 #37 NEW cov: 12499 ft: 15143 corp: 22/487b lim: 40 exec/s: 37 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:11:59.406 [2024-11-26 18:08:36.596387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.596410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.406 [2024-11-26 18:08:36.596469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b895b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.596480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:11:59.406 [2024-11-26 18:08:36.596538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.596548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.406 #38 NEW cov: 12499 ft: 15223 corp: 23/514b lim: 40 exec/s: 38 rss: 74Mb L: 27/40 MS: 1 CrossOver- 00:11:59.406 [2024-11-26 18:08:36.656573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.656606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.406 [2024-11-26 18:08:36.656684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.656695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.406 [2024-11-26 18:08:36.656751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.656762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.406 #39 NEW cov: 12499 ft: 15231 corp: 24/543b lim: 40 exec/s: 39 rss: 74Mb L: 29/40 MS: 1 CopyPart- 00:11:59.406 [2024-11-26 18:08:36.696363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.696397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.406 #40 NEW cov: 12499 ft: 15289 corp: 25/558b lim: 40 exec/s: 40 rss: 74Mb L: 15/40 MS: 1 ShuffleBytes- 00:11:59.406 [2024-11-26 18:08:36.736497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.406 [2024-11-26 18:08:36.736519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.407 #41 NEW cov: 12499 ft: 15303 corp: 26/571b lim: 40 exec/s: 41 rss: 74Mb L: 13/40 MS: 1 CrossOver- 00:11:59.407 [2024-11-26 18:08:36.786655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5bafa4a4 cdw11:a4a4a4a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.407 [2024-11-26 18:08:36.786676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.407 #42 NEW cov: 12499 ft: 15339 corp: 27/586b lim: 40 exec/s: 42 rss: 74Mb L: 15/40 MS: 1 ChangeBinInt- 00:11:59.407 [2024-11-26 18:08:36.836796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5a5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.407 [2024-11-26 18:08:36.836819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.666 #43 NEW cov: 12499 ft: 15350 corp: 28/599b lim: 40 exec/s: 43 rss: 74Mb L: 13/40 MS: 1 EraseBytes- 00:11:59.666 [2024-11-26 18:08:36.896976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5a5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.896999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.666 #44 NEW cov: 12499 ft: 15361 corp: 29/612b lim: 40 exec/s: 44 rss: 74Mb L: 13/40 MS: 1 ChangeByte- 00:11:59.666 [2024-11-26 18:08:36.947738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.947760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:36.947833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:008d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.947844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:36.947901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.947912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:36.947968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.947978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:36.948033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5b5b8c5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.948060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:59.666 #45 NEW cov: 12499 ft: 15368 corp: 30/652b lim: 40 exec/s: 45 rss: 74Mb L: 40/40 MS: 1 ChangeByte- 00:11:59.666 [2024-11-26 18:08:36.987407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.987433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:36.987507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:36.987518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.666 #46 NEW cov: 12499 ft: 15380 corp: 31/673b lim: 40 exec/s: 46 rss: 74Mb L: 21/40 MS: 1 InsertRepeatedBytes- 00:11:59.666 [2024-11-26 18:08:37.027675] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:a85b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.027697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:37.027774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b895b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.027785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:37.027842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.027853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.666 #47 NEW cov: 12499 ft: 15391 corp: 32/700b lim: 40 exec/s: 47 rss: 74Mb L: 27/40 MS: 1 ChangeBinInt- 00:11:59.666 [2024-11-26 18:08:37.088084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.088107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:37.088166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ad0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.088177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:37.088235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.088245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:37.088301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.088312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:59.666 [2024-11-26 18:08:37.088366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:5bf25b5b cdw11:5b5b5b5d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.666 [2024-11-26 18:08:37.088382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:59.925 #48 NEW cov: 12499 ft: 15392 corp: 33/740b lim: 40 exec/s: 48 rss: 74Mb L: 40/40 MS: 1 ChangeBit- 00:11:59.925 [2024-11-26 18:08:37.147641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff2b40ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.925 [2024-11-26 18:08:37.147663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:59.925 #49 NEW cov: 12499 ft: 15424 corp: 34/750b lim: 40 exec/s: 49 rss: 74Mb L: 10/40 MS: 1 InsertByte- 00:11:59.925 [2024-11-26 18:08:37.197833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff2b40ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.926 [2024-11-26 18:08:37.197856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.926 #50 NEW cov: 12499 ft: 15461 corp: 35/760b lim: 40 exec/s: 50 rss: 75Mb L: 10/40 MS: 1 CopyPart- 00:11:59.926 [2024-11-26 18:08:37.258142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.926 [2024-11-26 18:08:37.258164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.926 [2024-11-26 18:08:37.258239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b895b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.926 [2024-11-26 18:08:37.258250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.926 #51 NEW cov: 12499 ft: 15476 corp: 36/781b lim: 40 exec/s: 51 rss: 75Mb L: 21/40 MS: 1 EraseBytes- 00:11:59.926 [2024-11-26 18:08:37.298122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.926 [2024-11-26 18:08:37.298144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.926 #52 NEW cov: 12499 ft: 15481 corp: 37/790b lim: 40 exec/s: 52 rss: 75Mb L: 9/40 MS: 1 CopyPart- 00:11:59.926 [2024-11-26 18:08:37.338235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5a5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:59.926 [2024-11-26 18:08:37.338256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.185 #53 NEW cov: 12499 ft: 15487 corp: 38/803b lim: 40 exec/s: 53 rss: 75Mb L: 13/40 MS: 1 ChangeBinInt- 00:12:00.185 [2024-11-26 18:08:37.398427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:00.185 [2024-11-26 18:08:37.398451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.185 #55 NEW cov: 12499 ft: 15547 corp: 39/815b lim: 40 exec/s: 27 rss: 75Mb L: 12/40 MS: 2 EraseBytes-CopyPart- 00:12:00.185 #55 DONE cov: 12499 ft: 15547 corp: 39/815b lim: 40 exec/s: 27 rss: 75Mb 00:12:00.185 ###### Recommended dictionary. ###### 00:12:00.185 "\000\000\000\000\000\000\003\363" # Uses: 0 00:12:00.185 ###### End of recommended dictionary. 
###### 00:12:00.185 Done 55 runs in 2 second(s) 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:00.185 18:08:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:12:00.186 [2024-11-26 18:08:37.606136] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
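The nvmf/run.sh trace above captures the full launch recipe for this fuzzer instance: the fuzzer index (14) determines the TCP port (4414), the stock fuzz_json.conf is rewritten with sed so the target listens on that port, two LeakSanitizer suppressions are installed, and llvm_nvme_fuzz is started against the resulting TCP transport ID. Below is a minimal stand-alone sketch of that same sequence for reproducing a run outside Jenkins; SPDK_DIR, the sed output redirection, and the suppress-file redirections are assumptions not shown verbatim in the trace, while the flag values are copied from it.

#!/usr/bin/env bash
# Sketch of the traced nvmf fuzzer launch; paths and redirections are assumptions.
set -e
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}                 # assumed SPDK checkout location
fuzzer_type=14
port=44$(printf %02d "$fuzzer_type")                # 4414, as computed in the trace
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}
mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# Point the stock JSON config at this fuzzer's port (4420 -> 4414).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# Suppress the two known-benign leaks echoed in the trace.
suppress_file=/var/tmp/suppress_nvmf_fuzz
echo "leak:spdk_nvmf_qpair_disconnect"  > "$suppress_file"
echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"
# Invoke fuzzer 14 with the traced arguments: core mask 0x1, 512 MB hugepage memory, -t 1.
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK_DIR/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
    -t 1 -D "$corpus_dir" -Z "$fuzzer_type"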
00:12:00.186 [2024-11-26 18:08:37.606217] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292208 ] 00:12:00.445 [2024-11-26 18:08:37.822061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.445 [2024-11-26 18:08:37.861743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.704 [2024-11-26 18:08:37.924133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.704 [2024-11-26 18:08:37.940325] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:12:00.704 INFO: Running with entropic power schedule (0xFF, 100). 00:12:00.704 INFO: Seed: 639659963 00:12:00.704 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:00.704 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:00.704 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:12:00.704 INFO: A corpus is not provided, starting from an empty corpus 00:12:00.704 #2 INITED exec/s: 0 rss: 65Mb 00:12:00.704 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:00.704 This may also happen if the target rejected all inputs we tried so far 00:12:00.704 [2024-11-26 18:08:37.988133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.704 [2024-11-26 18:08:37.988158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.704 NEW_FUNC[1/716]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:12:00.704 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:00.704 #5 NEW cov: 12262 ft: 12263 corp: 2/10b lim: 35 exec/s: 0 rss: 73Mb L: 9/9 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:12:00.704 [2024-11-26 18:08:38.138988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.704 [2024-11-26 18:08:38.139015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.704 [2024-11-26 18:08:38.139090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.704 [2024-11-26 18:08:38.139102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.704 [2024-11-26 18:08:38.139162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.704 [2024-11-26 18:08:38.139177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.963 NEW_FUNC[1/1]: 0x1514598 in nvmf_tcp_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3555 00:12:00.963 #6 NEW cov: 12379 ft: 13444 corp: 3/31b lim: 35 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 
InsertRepeatedBytes- 00:12:00.963 [2024-11-26 18:08:38.179025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.179050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.179129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.179142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.179203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.179216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.963 #7 NEW cov: 12392 ft: 13729 corp: 4/58b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:12:00.963 [2024-11-26 18:08:38.239411] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.239434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.239493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.239507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.239566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.239580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.239638] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.239649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.963 #8 NEW cov: 12477 ft: 14335 corp: 5/91b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:12:00.963 [2024-11-26 18:08:38.299390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.299416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.299476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.299492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.299554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.299566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.963 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:12:00.963 #9 NEW cov: 12487 ft: 14504 corp: 6/117b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 CrossOver- 00:12:00.963 [2024-11-26 18:08:38.339542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.339567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.339654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.339668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.339724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.339737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.963 #10 NEW cov: 12487 ft: 14583 corp: 7/143b lim: 35 exec/s: 0 rss: 74Mb L: 26/33 MS: 1 ChangeBit- 00:12:00.963 [2024-11-26 18:08:38.399665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.399689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.399750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.399763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.963 [2024-11-26 18:08:38.399838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:00.963 [2024-11-26 18:08:38.399852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.223 #11 NEW cov: 12487 ft: 14640 corp: 8/165b lim: 35 exec/s: 0 rss: 74Mb L: 22/33 MS: 1 CrossOver- 00:12:01.223 [2024-11-26 18:08:38.440036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.440059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.440120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.440131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:12:01.223 [2024-11-26 18:08:38.440190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000099 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.440203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.440261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.440271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:01.223 #12 NEW cov: 12487 ft: 14696 corp: 9/194b lim: 35 exec/s: 0 rss: 74Mb L: 29/33 MS: 1 InsertRepeatedBytes- 00:12:01.223 [2024-11-26 18:08:38.499967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.499991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.500051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.500067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.500123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.500136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.223 #13 NEW cov: 12487 ft: 14799 corp: 10/221b lim: 35 exec/s: 0 rss: 74Mb L: 27/33 MS: 1 CrossOver- 00:12:01.223 [2024-11-26 18:08:38.539701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.539725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.223 #14 NEW cov: 12487 ft: 14886 corp: 11/231b lim: 35 exec/s: 0 rss: 74Mb L: 10/33 MS: 1 CrossOver- 00:12:01.223 NEW_FUNC[1/1]: 0x138ebc8 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1768 00:12:01.223 #18 NEW cov: 12510 ft: 14985 corp: 12/239b lim: 35 exec/s: 0 rss: 74Mb L: 8/33 MS: 4 InsertByte-CopyPart-ChangeByte-CopyPart- 00:12:01.223 [2024-11-26 18:08:38.620328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.620352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.620430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.620443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.620501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.620512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.223 #19 NEW cov: 12510 ft: 15011 corp: 13/261b lim: 35 exec/s: 0 rss: 74Mb L: 22/33 MS: 1 InsertByte- 00:12:01.223 [2024-11-26 18:08:38.660423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.660446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.660505] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.660516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.223 [2024-11-26 18:08:38.660589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.223 [2024-11-26 18:08:38.660601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.482 #20 NEW cov: 12510 ft: 15017 corp: 14/282b lim: 35 exec/s: 0 rss: 74Mb L: 21/33 MS: 1 ChangeBinInt- 00:12:01.482 [2024-11-26 18:08:38.700552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.700574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.482 [2024-11-26 18:08:38.700635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.700646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.482 [2024-11-26 18:08:38.700706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.700720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.482 #21 NEW cov: 12510 ft: 15034 corp: 15/304b lim: 35 exec/s: 0 rss: 74Mb L: 22/33 MS: 1 ShuffleBytes- 00:12:01.482 [2024-11-26 18:08:38.760326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.760348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.482 #22 NEW cov: 12510 ft: 15058 corp: 16/312b lim: 35 exec/s: 0 rss: 74Mb L: 8/33 MS: 1 EraseBytes- 00:12:01.482 [2024-11-26 18:08:38.801020] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.801042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.482 [2024-11-26 18:08:38.801099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.801112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.482 [2024-11-26 18:08:38.801170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.801183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.482 [2024-11-26 18:08:38.801242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.801257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:01.482 #23 NEW cov: 12510 ft: 15123 corp: 17/343b lim: 35 exec/s: 0 rss: 74Mb L: 31/33 MS: 1 InsertRepeatedBytes- 00:12:01.482 [2024-11-26 18:08:38.841131] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.482 [2024-11-26 18:08:38.841154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.483 [2024-11-26 18:08:38.841216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.841230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.483 [2024-11-26 18:08:38.841290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.841302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.483 [2024-11-26 18:08:38.841361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.841376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:01.483 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:01.483 #24 NEW cov: 12533 ft: 15147 corp: 18/377b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:12:01.483 [2024-11-26 18:08:38.901291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.901312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.483 [2024-11-26 18:08:38.901389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000077 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.901404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.483 [2024-11-26 18:08:38.901462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000077 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.901473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.483 [2024-11-26 18:08:38.901534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000077 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.483 [2024-11-26 18:08:38.901545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:01.483 #26 NEW cov: 12533 ft: 15165 corp: 19/408b lim: 35 exec/s: 0 rss: 74Mb L: 31/34 MS: 2 ChangeByte-InsertRepeatedBytes- 00:12:01.742 [2024-11-26 18:08:38.941438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.742 [2024-11-26 18:08:38.941460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.742 [2024-11-26 18:08:38.941536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:38.941547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:38.941606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:38.941617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:38.941679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:38.941690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:01.743 #27 NEW cov: 12533 ft: 15171 corp: 20/441b lim: 35 exec/s: 27 rss: 74Mb L: 33/34 MS: 1 CopyPart- 00:12:01.743 [2024-11-26 18:08:39.001389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.001411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.001471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.001483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.001544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.001557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.743 #28 NEW cov: 12533 ft: 15194 corp: 21/468b lim: 35 exec/s: 28 rss: 74Mb L: 27/34 MS: 1 ChangeBit- 00:12:01.743 [2024-11-26 18:08:39.061354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.061379] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.061439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.061451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.743 #29 NEW cov: 12533 ft: 15383 corp: 22/486b lim: 35 exec/s: 29 rss: 74Mb L: 18/34 MS: 1 InsertRepeatedBytes- 00:12:01.743 [2024-11-26 18:08:39.121515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.121537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.121598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.121612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.743 #30 NEW cov: 12533 ft: 15397 corp: 23/505b lim: 35 exec/s: 30 rss: 74Mb L: 19/34 MS: 1 CrossOver- 00:12:01.743 [2024-11-26 18:08:39.182087] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.182111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.182169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.182182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.182240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.182252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.743 [2024-11-26 18:08:39.182310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:01.743 [2024-11-26 18:08:39.182322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.001 #31 NEW cov: 12533 ft: 15414 corp: 24/538b lim: 35 exec/s: 31 rss: 75Mb L: 33/34 MS: 1 CopyPart- 00:12:02.001 #32 NEW cov: 12533 ft: 15429 corp: 25/546b lim: 35 exec/s: 32 rss: 75Mb L: 8/34 MS: 1 ChangeBinInt- 00:12:02.001 [2024-11-26 18:08:39.292577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.292601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.292677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.292690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.292749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.292763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.292822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.292835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.292894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.292908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:02.001 #36 NEW cov: 12533 ft: 15483 corp: 26/581b lim: 35 exec/s: 36 rss: 75Mb L: 35/35 MS: 4 EraseBytes-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:12:02.001 [2024-11-26 18:08:39.332486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.332508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.332570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.332582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.332640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.332653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.332709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.332722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.001 #37 NEW cov: 12533 ft: 15489 corp: 27/609b lim: 35 exec/s: 37 rss: 75Mb L: 28/35 MS: 1 InsertByte- 00:12:02.001 [2024-11-26 18:08:39.372456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.372478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.372540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.372552] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.372611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.372625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.001 #38 NEW cov: 12533 ft: 15504 corp: 28/631b lim: 35 exec/s: 38 rss: 75Mb L: 22/35 MS: 1 EraseBytes- 00:12:02.001 [2024-11-26 18:08:39.412772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.412794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.412853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.412866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.412925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.412937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.001 [2024-11-26 18:08:39.412996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.001 [2024-11-26 18:08:39.413009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.001 #39 NEW cov: 12533 ft: 15537 corp: 29/663b lim: 35 exec/s: 39 rss: 75Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:12:02.260 [2024-11-26 18:08:39.452512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.452541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.260 [2024-11-26 18:08:39.452602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.452613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.260 #40 NEW cov: 12533 ft: 15563 corp: 30/681b lim: 35 exec/s: 40 rss: 75Mb L: 18/35 MS: 1 CrossOver- 00:12:02.260 [2024-11-26 18:08:39.512901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.512926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.260 [2024-11-26 18:08:39.513002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.513016] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.260 [2024-11-26 18:08:39.513072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.513086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.260 #41 NEW cov: 12533 ft: 15595 corp: 31/707b lim: 35 exec/s: 41 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:12:02.260 [2024-11-26 18:08:39.553023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.553045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.260 [2024-11-26 18:08:39.553104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.553116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.260 [2024-11-26 18:08:39.553191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.260 [2024-11-26 18:08:39.553205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.260 #42 NEW cov: 12533 ft: 15598 corp: 32/734b lim: 35 exec/s: 42 rss: 75Mb L: 27/35 MS: 1 CopyPart- 00:12:02.261 [2024-11-26 18:08:39.592950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.261 [2024-11-26 18:08:39.592974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.261 [2024-11-26 18:08:39.593033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.261 [2024-11-26 18:08:39.593044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.261 #43 NEW cov: 12533 ft: 15609 corp: 33/752b lim: 35 exec/s: 43 rss: 75Mb L: 18/35 MS: 1 CopyPart- 00:12:02.261 [2024-11-26 18:08:39.653351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.261 [2024-11-26 18:08:39.653377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.261 [2024-11-26 18:08:39.653451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.261 [2024-11-26 18:08:39.653467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.261 [2024-11-26 18:08:39.653522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.261 [2024-11-26 18:08:39.653535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE 
ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.261 #44 NEW cov: 12533 ft: 15627 corp: 34/776b lim: 35 exec/s: 44 rss: 75Mb L: 24/35 MS: 1 EraseBytes- 00:12:02.520 [2024-11-26 18:08:39.713725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.713750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.713824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.713838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.713896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.713910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.713969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.713982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.520 #45 NEW cov: 12533 ft: 15644 corp: 35/808b lim: 35 exec/s: 45 rss: 75Mb L: 32/35 MS: 1 ShuffleBytes- 00:12:02.520 [2024-11-26 18:08:39.773851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.773874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.773934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.773946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.774003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000099 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.774016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.774074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.774088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.520 #46 NEW cov: 12533 ft: 15651 corp: 36/841b lim: 35 exec/s: 46 rss: 75Mb L: 33/35 MS: 1 CrossOver- 00:12:02.520 [2024-11-26 18:08:39.833843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.833866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:12:02.520 [2024-11-26 18:08:39.833925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.833938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.834011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.834027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.520 #47 NEW cov: 12533 ft: 15678 corp: 37/865b lim: 35 exec/s: 47 rss: 75Mb L: 24/35 MS: 1 ChangeBinInt- 00:12:02.520 [2024-11-26 18:08:39.894014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.894037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.894096] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:5 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.894110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.894170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.894180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.520 #48 NEW cov: 12533 ft: 15693 corp: 38/886b lim: 35 exec/s: 48 rss: 75Mb L: 21/35 MS: 1 CrossOver- 00:12:02.520 [2024-11-26 18:08:39.954143] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.954168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.954245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.954258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.520 [2024-11-26 18:08:39.954317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000091 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:02.520 [2024-11-26 18:08:39.954330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.779 #49 NEW cov: 12533 ft: 15701 corp: 39/913b lim: 35 exec/s: 24 rss: 76Mb L: 27/35 MS: 1 InsertByte- 00:12:02.779 #49 DONE cov: 12533 ft: 15701 corp: 39/913b lim: 35 exec/s: 24 rss: 76Mb 00:12:02.779 Done 49 runs in 2 second(s) 00:12:02.779 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:12:02.779 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:02.779 18:08:40 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:02.779 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:12:02.779 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:12:02.779 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:02.780 18:08:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:12:02.780 [2024-11-26 18:08:40.164521] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:02.780 [2024-11-26 18:08:40.164600] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292720 ] 00:12:03.038 [2024-11-26 18:08:40.377683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.038 [2024-11-26 18:08:40.417584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.038 [2024-11-26 18:08:40.479950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.298 [2024-11-26 18:08:40.496141] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:12:03.298 INFO: Running with entropic power schedule (0xFF, 100). 
00:12:03.298 INFO: Seed: 3196675379 00:12:03.298 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:03.298 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:03.298 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:12:03.298 INFO: A corpus is not provided, starting from an empty corpus 00:12:03.298 #2 INITED exec/s: 0 rss: 66Mb 00:12:03.298 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:03.298 This may also happen if the target rejected all inputs we tried so far 00:12:03.298 [2024-11-26 18:08:40.541802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.298 [2024-11-26 18:08:40.541830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.298 NEW_FUNC[1/716]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:12:03.298 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:03.298 #4 NEW cov: 12253 ft: 12232 corp: 2/10b lim: 35 exec/s: 0 rss: 73Mb L: 9/9 MS: 2 ShuffleBytes-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:12:03.298 [2024-11-26 18:08:40.692088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.298 [2024-11-26 18:08:40.692115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.298 #10 NEW cov: 12367 ft: 12621 corp: 3/19b lim: 35 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:12:03.557 [2024-11-26 18:08:40.752158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.557 [2024-11-26 18:08:40.752181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.557 #11 NEW cov: 12373 ft: 12971 corp: 4/28b lim: 35 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:12:03.557 [2024-11-26 18:08:40.792299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.557 [2024-11-26 18:08:40.792324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.557 #12 NEW cov: 12458 ft: 13341 corp: 5/39b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 CrossOver- 00:12:03.557 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:12:03.557 #16 NEW cov: 12472 ft: 13448 corp: 6/52b lim: 35 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 ShuffleBytes-CopyPart-ChangeBit-InsertRepeatedBytes- 00:12:03.557 [2024-11-26 18:08:40.872640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.557 [2024-11-26 18:08:40.872664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.557 [2024-11-26 18:08:40.872737] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.557 [2024-11-26 18:08:40.872749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.557 #17 NEW cov: 12472 ft: 13890 corp: 7/69b lim: 35 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 CMP- DE: "\001\000\000\000\002WkI"- 00:12:03.557 NEW_FUNC[1/2]: 0x4705d8 in feat_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:332 00:12:03.557 NEW_FUNC[2/2]: 0x137d8c8 in nvmf_ctrlr_get_features_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1709 00:12:03.557 #21 NEW cov: 12521 ft: 14239 corp: 8/91b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 4 ShuffleBytes-InsertByte-ChangeByte-InsertRepeatedBytes- 00:12:03.557 [2024-11-26 18:08:40.972934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.557 [2024-11-26 18:08:40.972956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.557 [2024-11-26 18:08:40.973014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000246 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.557 [2024-11-26 18:08:40.973031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.557 #22 NEW cov: 12521 ft: 14270 corp: 9/108b lim: 35 exec/s: 0 rss: 74Mb L: 17/22 MS: 1 CMP- DE: "\377\204FV\206\031\011\322"- 00:12:03.815 [2024-11-26 18:08:41.013051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000246 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.013072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.815 [2024-11-26 18:08:41.013129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.013140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.815 #23 NEW cov: 12521 ft: 14306 corp: 10/125b lim: 35 exec/s: 0 rss: 74Mb L: 17/22 MS: 1 CopyPart- 00:12:03.815 [2024-11-26 18:08:41.073214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.073236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.815 [2024-11-26 18:08:41.073294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000246 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.073305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.815 #24 NEW cov: 12521 ft: 14368 corp: 11/141b lim: 35 exec/s: 0 rss: 74Mb L: 16/22 MS: 1 EraseBytes- 00:12:03.815 [2024-11-26 18:08:41.113238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:03.815 [2024-11-26 18:08:41.113264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.815 #25 NEW cov: 12521 ft: 14400 corp: 12/150b lim: 35 exec/s: 0 rss: 74Mb L: 9/22 MS: 1 ShuffleBytes- 00:12:03.815 [2024-11-26 18:08:41.173703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.173726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.815 [2024-11-26 18:08:41.173783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.173794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:03.815 #26 NEW cov: 12521 ft: 14419 corp: 13/172b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 CopyPart- 00:12:03.815 [2024-11-26 18:08:41.233675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.233698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.815 [2024-11-26 18:08:41.233755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000246 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:03.815 [2024-11-26 18:08:41.233774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.072 #27 NEW cov: 12521 ft: 14439 corp: 14/188b lim: 35 exec/s: 0 rss: 74Mb L: 16/22 MS: 1 ChangeByte- 00:12:04.072 [2024-11-26 18:08:41.293701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.072 [2024-11-26 18:08:41.293724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.072 #28 NEW cov: 12521 ft: 14473 corp: 15/197b lim: 35 exec/s: 0 rss: 74Mb L: 9/22 MS: 1 EraseBytes- 00:12:04.072 [2024-11-26 18:08:41.353852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.072 [2024-11-26 18:08:41.353875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.072 #29 NEW cov: 12521 ft: 14498 corp: 16/207b lim: 35 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 InsertByte- 00:12:04.072 [2024-11-26 18:08:41.414059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.072 [2024-11-26 18:08:41.414083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.072 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:04.072 #30 NEW cov: 12544 ft: 14552 corp: 17/217b lim: 35 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 ChangeBit- 00:12:04.072 [2024-11-26 18:08:41.474208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:04.072 [2024-11-26 18:08:41.474231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.072 #31 NEW cov: 12544 ft: 14557 corp: 18/227b lim: 35 exec/s: 0 rss: 75Mb L: 10/22 MS: 1 ChangeASCIIInt- 00:12:04.330 [2024-11-26 18:08:41.534326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.330 [2024-11-26 18:08:41.534349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.330 #32 NEW cov: 12544 ft: 14597 corp: 19/238b lim: 35 exec/s: 32 rss: 75Mb L: 11/22 MS: 1 ChangeByte- 00:12:04.330 [2024-11-26 18:08:41.594530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.330 [2024-11-26 18:08:41.594555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.330 #33 NEW cov: 12544 ft: 14615 corp: 20/247b lim: 35 exec/s: 33 rss: 75Mb L: 9/22 MS: 1 ShuffleBytes- 00:12:04.330 [2024-11-26 18:08:41.634689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.330 [2024-11-26 18:08:41.634711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.330 #34 NEW cov: 12544 ft: 14656 corp: 21/256b lim: 35 exec/s: 34 rss: 75Mb L: 9/22 MS: 1 ShuffleBytes- 00:12:04.330 [2024-11-26 18:08:41.674797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.330 [2024-11-26 18:08:41.674818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.330 #35 NEW cov: 12544 ft: 14673 corp: 22/265b lim: 35 exec/s: 35 rss: 75Mb L: 9/22 MS: 1 ChangeByte- 00:12:04.330 [2024-11-26 18:08:41.734940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.330 [2024-11-26 18:08:41.734961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.330 #36 NEW cov: 12544 ft: 14680 corp: 23/274b lim: 35 exec/s: 36 rss: 75Mb L: 9/22 MS: 1 ChangeByte- 00:12:04.330 [2024-11-26 18:08:41.774998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.331 [2024-11-26 18:08:41.775020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.589 #37 NEW cov: 12544 ft: 14696 corp: 24/283b lim: 35 exec/s: 37 rss: 75Mb L: 9/22 MS: 1 ChangeBinInt- 00:12:04.589 [2024-11-26 18:08:41.835202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.835225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.589 #38 NEW cov: 12544 ft: 14697 corp: 25/296b lim: 35 exec/s: 38 rss: 75Mb L: 13/22 MS: 1 EraseBytes- 00:12:04.589 [2024-11-26 18:08:41.875389] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.875411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.589 [2024-11-26 18:08:41.875484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.875495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.589 #39 NEW cov: 12544 ft: 14711 corp: 26/315b lim: 35 exec/s: 39 rss: 75Mb L: 19/22 MS: 1 InsertRepeatedBytes- 00:12:04.589 #40 NEW cov: 12544 ft: 14780 corp: 27/337b lim: 35 exec/s: 40 rss: 75Mb L: 22/22 MS: 1 ChangeASCIIInt- 00:12:04.589 [2024-11-26 18:08:41.976051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.976074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.589 [2024-11-26 18:08:41.976129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000036e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.976140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.589 [2024-11-26 18:08:41.976194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000036e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.976208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.589 [2024-11-26 18:08:41.976263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000036e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.589 [2024-11-26 18:08:41.976273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:04.589 #41 NEW cov: 12544 ft: 15254 corp: 28/371b lim: 35 exec/s: 41 rss: 75Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:12:04.847 [2024-11-26 18:08:42.035738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.847 [2024-11-26 18:08:42.035761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.847 #42 NEW cov: 12544 ft: 15270 corp: 29/381b lim: 35 exec/s: 42 rss: 75Mb L: 10/34 MS: 1 CrossOver- 00:12:04.847 [2024-11-26 18:08:42.076054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.847 [2024-11-26 18:08:42.076076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.847 [2024-11-26 18:08:42.076132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.847 [2024-11-26 18:08:42.076142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:12:04.847 #43 NEW cov: 12544 ft: 15297 corp: 30/396b lim: 35 exec/s: 43 rss: 75Mb L: 15/34 MS: 1 CMP- DE: "\377\001\000\000"- 00:12:04.847 [2024-11-26 18:08:42.136041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.847 [2024-11-26 18:08:42.136063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.847 #44 NEW cov: 12544 ft: 15310 corp: 31/409b lim: 35 exec/s: 44 rss: 75Mb L: 13/34 MS: 1 CrossOver- 00:12:04.847 [2024-11-26 18:08:42.176113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.847 [2024-11-26 18:08:42.176134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.847 #45 NEW cov: 12544 ft: 15331 corp: 32/422b lim: 35 exec/s: 45 rss: 75Mb L: 13/34 MS: 1 ChangeBinInt- 00:12:04.847 [2024-11-26 18:08:42.236355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.847 [2024-11-26 18:08:42.236382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.848 #46 NEW cov: 12544 ft: 15340 corp: 33/432b lim: 35 exec/s: 46 rss: 75Mb L: 10/34 MS: 1 EraseBytes- 00:12:04.848 [2024-11-26 18:08:42.276849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000246 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.848 [2024-11-26 18:08:42.276872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.848 [2024-11-26 18:08:42.276929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.848 [2024-11-26 18:08:42.276940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.848 [2024-11-26 18:08:42.276997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.848 [2024-11-26 18:08:42.277007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.106 #47 NEW cov: 12544 ft: 15456 corp: 34/457b lim: 35 exec/s: 47 rss: 75Mb L: 25/34 MS: 1 PersAutoDict- DE: "\377\204FV\206\031\011\322"- 00:12:05.106 [2024-11-26 18:08:42.336631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.106 [2024-11-26 18:08:42.336653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.106 #48 NEW cov: 12544 ft: 15500 corp: 35/467b lim: 35 exec/s: 48 rss: 75Mb L: 10/34 MS: 1 InsertByte- 00:12:05.106 [2024-11-26 18:08:42.376743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.106 [2024-11-26 18:08:42.376764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.106 #49 NEW cov: 12544 ft: 15511 corp: 
36/476b lim: 35 exec/s: 49 rss: 76Mb L: 9/34 MS: 1 ChangeBinInt- 00:12:05.106 #50 NEW cov: 12544 ft: 15522 corp: 37/498b lim: 35 exec/s: 50 rss: 76Mb L: 22/34 MS: 1 ChangeASCIIInt- 00:12:05.106 [2024-11-26 18:08:42.477441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.106 [2024-11-26 18:08:42.477463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.106 [2024-11-26 18:08:42.477521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.106 [2024-11-26 18:08:42.477540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.106 [2024-11-26 18:08:42.477596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.107 [2024-11-26 18:08:42.477606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.107 [2024-11-26 18:08:42.477664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.107 [2024-11-26 18:08:42.477675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.107 #51 NEW cov: 12544 ft: 15536 corp: 38/530b lim: 35 exec/s: 51 rss: 76Mb L: 32/34 MS: 1 CrossOver- 00:12:05.107 [2024-11-26 18:08:42.517118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.107 [2024-11-26 18:08:42.517141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.366 #52 NEW cov: 12544 ft: 15543 corp: 39/543b lim: 35 exec/s: 26 rss: 76Mb L: 13/34 MS: 1 CrossOver- 00:12:05.366 #52 DONE cov: 12544 ft: 15543 corp: 39/543b lim: 35 exec/s: 26 rss: 76Mb 00:12:05.366 ###### Recommended dictionary. ###### 00:12:05.366 "\377\377\377\377\377\377\377\377" # Uses: 0 00:12:05.366 "\001\000\000\000\002WkI" # Uses: 0 00:12:05.366 "\377\204FV\206\031\011\322" # Uses: 1 00:12:05.366 "\377\001\000\000" # Uses: 0 00:12:05.366 ###### End of recommended dictionary. 
###### 00:12:05.366 Done 52 runs in 2 second(s) 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:05.366 18:08:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:12:05.366 [2024-11-26 18:08:42.719248] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:12:05.366 [2024-11-26 18:08:42.719331] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293061 ] 00:12:05.625 [2024-11-26 18:08:42.938507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.626 [2024-11-26 18:08:42.979006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.626 [2024-11-26 18:08:43.041403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.626 [2024-11-26 18:08:43.057601] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:12:05.884 INFO: Running with entropic power schedule (0xFF, 100). 00:12:05.884 INFO: Seed: 1464698905 00:12:05.884 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:05.884 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:05.884 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:12:05.884 INFO: A corpus is not provided, starting from an empty corpus 00:12:05.884 #2 INITED exec/s: 0 rss: 65Mb 00:12:05.884 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:05.884 This may also happen if the target rejected all inputs we tried so far 00:12:05.884 [2024-11-26 18:08:43.102950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.884 [2024-11-26 18:08:43.102979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:05.884 NEW_FUNC[1/717]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:12:05.884 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:05.884 #18 NEW cov: 12357 ft: 12343 corp: 2/41b lim: 105 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:12:05.884 [2024-11-26 18:08:43.293421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:05.884 [2024-11-26 18:08:43.293449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.142 #19 NEW cov: 12471 ft: 12899 corp: 3/81b lim: 105 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 00:12:06.142 [2024-11-26 18:08:43.353664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.353692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.353755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.353775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.142 #25 NEW cov: 12477 ft: 13545 
corp: 4/138b lim: 105 exec/s: 0 rss: 73Mb L: 57/57 MS: 1 CopyPart- 00:12:06.142 [2024-11-26 18:08:43.413834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.413858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.413921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.413943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.142 #27 NEW cov: 12562 ft: 13926 corp: 5/182b lim: 105 exec/s: 0 rss: 73Mb L: 44/57 MS: 2 CrossOver-CrossOver- 00:12:06.142 [2024-11-26 18:08:43.474106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.474130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.474176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.474189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.474245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.474258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:06.142 #33 NEW cov: 12562 ft: 14333 corp: 6/254b lim: 105 exec/s: 0 rss: 73Mb L: 72/72 MS: 1 CopyPart- 00:12:06.142 [2024-11-26 18:08:43.513982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.514005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.142 #34 NEW cov: 12562 ft: 14502 corp: 7/294b lim: 105 exec/s: 0 rss: 73Mb L: 40/72 MS: 1 ChangeBinInt- 00:12:06.142 [2024-11-26 18:08:43.554480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.554507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.554554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.554567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.554620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.554631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:06.142 [2024-11-26 18:08:43.554683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.142 [2024-11-26 18:08:43.554695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:06.401 #35 NEW cov: 12562 ft: 15091 corp: 8/383b lim: 105 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 CopyPart- 00:12:06.401 [2024-11-26 18:08:43.614505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.614528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.401 [2024-11-26 18:08:43.614574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.614587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.401 [2024-11-26 18:08:43.614623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.614636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:06.401 #36 NEW cov: 12562 ft: 15208 corp: 9/449b lim: 105 exec/s: 0 rss: 73Mb L: 66/89 MS: 1 CrossOver- 00:12:06.401 [2024-11-26 18:08:43.674388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.674414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.401 #37 NEW cov: 12562 ft: 15227 corp: 10/489b lim: 105 exec/s: 0 rss: 73Mb L: 40/89 MS: 1 ChangeBit- 00:12:06.401 [2024-11-26 18:08:43.724528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9331882294333505921 len:33154 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.724552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.401 #39 NEW cov: 12562 ft: 15258 corp: 11/510b lim: 105 exec/s: 0 rss: 73Mb L: 21/89 MS: 2 InsertRepeatedBytes-InsertByte- 00:12:06.401 [2024-11-26 18:08:43.764677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9331882294333505921 len:32898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.764700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.401 #40 NEW cov: 12562 ft: 15305 corp: 12/531b lim: 105 exec/s: 0 rss: 73Mb L: 21/89 MS: 1 ChangeBit- 00:12:06.401 [2024-11-26 18:08:43.814781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.401 [2024-11-26 18:08:43.814804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.659 #41 NEW cov: 12562 ft: 15337 corp: 13/560b lim: 105 exec/s: 0 rss: 73Mb L: 29/89 MS: 1 EraseBytes- 00:12:06.660 [2024-11-26 18:08:43.874973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:43.874997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.660 #42 NEW cov: 12562 ft: 15366 corp: 14/601b lim: 105 exec/s: 0 rss: 73Mb L: 41/89 MS: 1 InsertByte- 00:12:06.660 [2024-11-26 18:08:43.935062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:43.935089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.660 #43 NEW cov: 12562 ft: 15374 corp: 15/630b lim: 105 exec/s: 0 rss: 73Mb L: 29/89 MS: 1 CopyPart- 00:12:06.660 [2024-11-26 18:08:43.985367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:43.985396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.660 [2024-11-26 18:08:43.985462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65520 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:43.985473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.660 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:06.660 #44 NEW cov: 12585 ft: 15414 corp: 16/672b lim: 105 exec/s: 0 rss: 74Mb L: 42/89 MS: 1 InsertByte- 00:12:06.660 [2024-11-26 18:08:44.045418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:44.045444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.660 #45 NEW cov: 12585 ft: 15428 corp: 17/713b lim: 105 exec/s: 0 rss: 74Mb L: 41/89 MS: 1 InsertByte- 00:12:06.660 [2024-11-26 18:08:44.085728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:44.085752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.660 [2024-11-26 18:08:44.085820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.660 [2024-11-26 18:08:44.085836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.918 #46 NEW cov: 12585 ft: 15475 corp: 18/757b lim: 105 exec/s: 46 rss: 74Mb L: 44/89 MS: 1 CMP- DE: "\017\000\000\000"- 00:12:06.918 [2024-11-26 18:08:44.125657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65296 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.125681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.918 #47 NEW cov: 12585 ft: 15487 corp: 19/797b lim: 105 exec/s: 47 rss: 74Mb L: 40/89 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:12:06.918 [2024-11-26 18:08:44.165914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.165937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.918 [2024-11-26 18:08:44.166004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.166014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.918 #48 NEW cov: 12585 ft: 15507 corp: 20/856b lim: 105 exec/s: 48 rss: 74Mb L: 59/89 MS: 1 InsertRepeatedBytes- 00:12:06.918 [2024-11-26 18:08:44.225951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.225975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.918 #49 NEW cov: 12585 ft: 15532 corp: 21/897b lim: 105 exec/s: 49 rss: 74Mb L: 41/89 MS: 1 InsertByte- 00:12:06.918 [2024-11-26 18:08:44.256026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.256051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.918 #50 NEW cov: 12585 ft: 15573 corp: 22/938b lim: 105 exec/s: 50 rss: 74Mb L: 41/89 MS: 1 CrossOver- 00:12:06.918 [2024-11-26 18:08:44.296419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.296447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.918 [2024-11-26 18:08:44.296516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.296532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:06.918 [2024-11-26 18:08:44.296592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.296607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:06.918 #51 NEW cov: 12585 ft: 15598 corp: 23/1005b lim: 105 exec/s: 51 rss: 74Mb L: 67/89 MS: 1 InsertByte- 00:12:06.918 [2024-11-26 18:08:44.356404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.356428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:06.918 [2024-11-26 18:08:44.356494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:06.918 [2024-11-26 18:08:44.356513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.176 #52 NEW cov: 12585 ft: 15613 corp: 24/1049b lim: 105 exec/s: 52 rss: 74Mb L: 44/89 MS: 1 CMP- DE: "\001\000\000\000"- 00:12:07.176 [2024-11-26 18:08:44.396558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.396582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.176 [2024-11-26 18:08:44.396653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.396666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.176 #53 NEW cov: 12585 ft: 15640 corp: 25/1107b lim: 105 exec/s: 53 rss: 74Mb L: 58/89 MS: 1 CopyPart- 00:12:07.176 [2024-11-26 18:08:44.436546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.436570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.176 #54 NEW cov: 12585 ft: 15655 corp: 26/1136b lim: 105 exec/s: 54 rss: 74Mb L: 29/89 MS: 1 CopyPart- 00:12:07.176 [2024-11-26 18:08:44.477029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.477052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.176 [2024-11-26 18:08:44.477119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.477132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.176 [2024-11-26 18:08:44.477179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.477192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:07.176 [2024-11-26 18:08:44.477239] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.477251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:07.176 #55 NEW cov: 12585 ft: 15690 corp: 27/1240b lim: 105 exec/s: 55 rss: 74Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:12:07.176 [2024-11-26 18:08:44.536966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.536988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.176 [2024-11-26 18:08:44.537055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.537071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.176 #56 NEW cov: 12585 ft: 15700 corp: 28/1282b lim: 105 exec/s: 56 rss: 74Mb L: 42/104 MS: 1 ShuffleBytes- 00:12:07.176 [2024-11-26 18:08:44.596996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.176 [2024-11-26 18:08:44.597020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.176 #57 NEW cov: 12585 ft: 15708 corp: 29/1322b lim: 105 exec/s: 57 rss: 74Mb L: 40/104 MS: 1 ChangeBit- 00:12:07.434 [2024-11-26 18:08:44.637102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18374967958943301631 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.637125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.434 #58 NEW cov: 12585 ft: 15717 corp: 30/1363b lim: 105 exec/s: 58 rss: 74Mb L: 41/104 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:12:07.434 [2024-11-26 18:08:44.687629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.687652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.687711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.687730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.687773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.687786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.687837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 
lba:723401728380766730 len:2571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.687852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:07.434 #59 NEW cov: 12585 ft: 15734 corp: 31/1459b lim: 105 exec/s: 59 rss: 74Mb L: 96/104 MS: 1 InsertRepeatedBytes- 00:12:07.434 [2024-11-26 18:08:44.727665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.727688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.727751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.727767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.727818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.727830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:07.434 #60 NEW cov: 12585 ft: 15749 corp: 32/1540b lim: 105 exec/s: 60 rss: 74Mb L: 81/104 MS: 1 CrossOver- 00:12:07.434 [2024-11-26 18:08:44.767703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:12800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.767727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.767777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.767788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.434 #61 NEW cov: 12585 ft: 15758 corp: 33/1584b lim: 105 exec/s: 61 rss: 74Mb L: 44/104 MS: 1 ChangeByte- 00:12:07.434 [2024-11-26 18:08:44.828083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.828107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.828151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.828165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.828218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.828231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:07.434 [2024-11-26 18:08:44.828278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:723401728380766730 len:65291 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.434 [2024-11-26 18:08:44.828290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:07.434 #62 NEW cov: 12585 ft: 15759 corp: 34/1681b lim: 105 exec/s: 62 rss: 74Mb L: 97/104 MS: 1 CrossOver- 00:12:07.694 [2024-11-26 18:08:44.888264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:44.888288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.694 [2024-11-26 18:08:44.888336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:44.888348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.694 [2024-11-26 18:08:44.888427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:44.888442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:07.694 [2024-11-26 18:08:44.888492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:44.888505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:07.694 #63 NEW cov: 12585 ft: 15767 corp: 35/1766b lim: 105 exec/s: 63 rss: 74Mb L: 85/104 MS: 1 EraseBytes- 00:12:07.694 [2024-11-26 18:08:44.927984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:44.928008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.694 #65 NEW cov: 12585 ft: 15778 corp: 36/1796b lim: 105 exec/s: 65 rss: 74Mb L: 30/104 MS: 2 InsertByte-CrossOver- 00:12:07.694 [2024-11-26 18:08:44.958076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:44.958098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.694 #66 NEW cov: 12585 ft: 15786 corp: 37/1826b lim: 105 exec/s: 66 rss: 74Mb L: 30/104 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\013"- 00:12:07.694 [2024-11-26 18:08:45.008328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:45.008351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.694 [2024-11-26 18:08:45.008423] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:45.008434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.694 #67 NEW cov: 12585 ft: 15796 corp: 38/1884b lim: 105 exec/s: 67 rss: 75Mb L: 58/104 MS: 1 ChangeByte- 00:12:07.694 [2024-11-26 18:08:45.068491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073575333887 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:45.068514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:07.694 [2024-11-26 18:08:45.068577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65520 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:07.694 [2024-11-26 18:08:45.068592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:07.694 #68 NEW cov: 12585 ft: 15815 corp: 39/1926b lim: 105 exec/s: 34 rss: 75Mb L: 42/104 MS: 1 ChangeBinInt- 00:12:07.694 #68 DONE cov: 12585 ft: 15815 corp: 39/1926b lim: 105 exec/s: 34 rss: 75Mb 00:12:07.694 ###### Recommended dictionary. ###### 00:12:07.694 "\017\000\000\000" # Uses: 1 00:12:07.694 "\001\000\000\000" # Uses: 1 00:12:07.694 "\000\000\000\000\000\000\000\013" # Uses: 0 00:12:07.694 ###### End of recommended dictionary. ###### 00:12:07.694 Done 68 runs in 2 second(s) 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:07.953 18:08:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:12:07.953 [2024-11-26 18:08:45.246676] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:07.953 [2024-11-26 18:08:45.246756] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293517 ] 00:12:08.212 [2024-11-26 18:08:45.449229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.212 [2024-11-26 18:08:45.489691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.212 [2024-11-26 18:08:45.552135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.212 [2024-11-26 18:08:45.568325] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:12:08.212 INFO: Running with entropic power schedule (0xFF, 100). 00:12:08.212 INFO: Seed: 3973712137 00:12:08.212 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:08.212 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:08.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:12:08.212 INFO: A corpus is not provided, starting from an empty corpus 00:12:08.212 #2 INITED exec/s: 0 rss: 66Mb 00:12:08.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:08.212 This may also happen if the target rejected all inputs we tried so far 00:12:08.212 [2024-11-26 18:08:45.637237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.212 [2024-11-26 18:08:45.637284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.212 [2024-11-26 18:08:45.637342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.212 [2024-11-26 18:08:45.637379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.212 [2024-11-26 18:08:45.637460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.212 [2024-11-26 18:08:45.637480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.470 NEW_FUNC[1/718]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:12:08.470 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:08.470 #35 NEW cov: 12379 ft: 12380 corp: 2/95b lim: 120 exec/s: 0 rss: 73Mb L: 94/94 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:12:08.470 [2024-11-26 18:08:45.867176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.470 [2024-11-26 18:08:45.867219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.470 #36 NEW cov: 12492 ft: 13837 corp: 3/120b lim: 120 exec/s: 0 rss: 73Mb L: 25/94 MS: 1 InsertRepeatedBytes- 00:12:08.728 [2024-11-26 18:08:45.938169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.938204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:45.938284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.938303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:45.938398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.938416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.728 #37 NEW cov: 12498 ft: 14014 corp: 4/200b lim: 120 exec/s: 0 rss: 73Mb L: 80/94 MS: 1 InsertRepeatedBytes- 00:12:08.728 [2024-11-26 18:08:45.998875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.998907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:45.999009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.999026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:45.999098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.999117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:45.999188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:45.999208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:08.728 #43 NEW cov: 12583 ft: 14672 corp: 5/296b lim: 120 exec/s: 0 rss: 73Mb L: 96/96 MS: 1 CopyPart- 00:12:08.728 [2024-11-26 18:08:46.088669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:46.088704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:46.088781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:46.088800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:46.088900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:46.088920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.728 #49 NEW cov: 12583 ft: 14917 corp: 6/391b lim: 120 exec/s: 0 rss: 73Mb L: 95/96 MS: 1 InsertByte- 00:12:08.728 [2024-11-26 18:08:46.149299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836035054270 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:46.149330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:46.149426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:46.149445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:46.149539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 
lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.728 [2024-11-26 18:08:46.149555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.728 [2024-11-26 18:08:46.149639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.729 [2024-11-26 18:08:46.149657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:08.986 #51 NEW cov: 12583 ft: 15047 corp: 7/489b lim: 120 exec/s: 0 rss: 73Mb L: 98/98 MS: 2 InsertByte-CrossOver- 00:12:08.986 [2024-11-26 18:08:46.209157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.209187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.209277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.209296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.209393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073457893375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.209409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.986 #52 NEW cov: 12583 ft: 15117 corp: 8/570b lim: 120 exec/s: 0 rss: 73Mb L: 81/98 MS: 1 InsertByte- 00:12:08.986 [2024-11-26 18:08:46.299836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.299868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.299943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.299963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.300044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.300058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.300155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.300173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:08.986 #53 NEW cov: 12583 ft: 15173 corp: 9/666b lim: 120 exec/s: 0 rss: 
73Mb L: 96/98 MS: 1 CopyPart- 00:12:08.986 [2024-11-26 18:08:46.389836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.389866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.389963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.389980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:08.986 [2024-11-26 18:08:46.390078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.986 [2024-11-26 18:08:46.390098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:08.986 #54 NEW cov: 12583 ft: 15247 corp: 10/761b lim: 120 exec/s: 0 rss: 74Mb L: 95/98 MS: 1 ChangeByte- 00:12:09.242 [2024-11-26 18:08:46.449179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.449210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.242 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:09.242 #55 NEW cov: 12606 ft: 15285 corp: 11/786b lim: 120 exec/s: 0 rss: 74Mb L: 25/98 MS: 1 ChangeBinInt- 00:12:09.242 [2024-11-26 18:08:46.540355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.540394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.242 [2024-11-26 18:08:46.540480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.540502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.242 [2024-11-26 18:08:46.540572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.540591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.242 #56 NEW cov: 12606 ft: 15306 corp: 12/880b lim: 120 exec/s: 0 rss: 74Mb L: 94/98 MS: 1 CrossOver- 00:12:09.242 [2024-11-26 18:08:46.600507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.600541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.242 [2024-11-26 18:08:46.600615] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.600633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.242 [2024-11-26 18:08:46.600725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.242 [2024-11-26 18:08:46.600744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.242 #57 NEW cov: 12606 ft: 15325 corp: 13/974b lim: 120 exec/s: 57 rss: 74Mb L: 94/98 MS: 1 ShuffleBytes- 00:12:09.499 [2024-11-26 18:08:46.690873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.690904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.690990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.691008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.691098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.691116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.499 #58 NEW cov: 12606 ft: 15361 corp: 14/1069b lim: 120 exec/s: 58 rss: 74Mb L: 95/98 MS: 1 ShuffleBytes- 00:12:09.499 [2024-11-26 18:08:46.751081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.751110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.751198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.751217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.751306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:105272848596480 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.751325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.499 #59 NEW cov: 12606 ft: 15381 corp: 15/1164b lim: 120 exec/s: 59 rss: 74Mb L: 95/98 MS: 1 ChangeBinInt- 00:12:09.499 [2024-11-26 18:08:46.841297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.841327] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.841422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069431361535 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.841443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.841536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.841561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.499 #60 NEW cov: 12606 ft: 15405 corp: 16/1258b lim: 120 exec/s: 60 rss: 74Mb L: 94/98 MS: 1 ChangeBinInt- 00:12:09.499 [2024-11-26 18:08:46.901910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836035054270 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.901942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.902042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.902063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.902163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.902182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.499 [2024-11-26 18:08:46.902270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.499 [2024-11-26 18:08:46.902294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:09.755 #61 NEW cov: 12606 ft: 15484 corp: 17/1356b lim: 120 exec/s: 61 rss: 74Mb L: 98/98 MS: 1 ChangeBinInt- 00:12:09.755 [2024-11-26 18:08:46.991865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:46.991898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.755 [2024-11-26 18:08:46.991979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:46.992000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.755 [2024-11-26 18:08:46.992091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:105272848596480 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 
[2024-11-26 18:08:46.992109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.755 #62 NEW cov: 12606 ft: 15503 corp: 18/1451b lim: 120 exec/s: 62 rss: 74Mb L: 95/98 MS: 1 ChangeBinInt- 00:12:09.755 [2024-11-26 18:08:47.082137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:47.082170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.755 [2024-11-26 18:08:47.082252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069431361339 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:47.082270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.755 [2024-11-26 18:08:47.082367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:47.082387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:09.755 #63 NEW cov: 12606 ft: 15523 corp: 19/1546b lim: 120 exec/s: 63 rss: 74Mb L: 95/98 MS: 1 InsertByte- 00:12:09.755 [2024-11-26 18:08:47.172478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446726481523507199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:47.172508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:09.755 [2024-11-26 18:08:47.172602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:47.172624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:09.755 [2024-11-26 18:08:47.172709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:09.755 [2024-11-26 18:08:47.172729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:10.011 #64 NEW cov: 12606 ft: 15568 corp: 20/1640b lim: 120 exec/s: 64 rss: 74Mb L: 94/98 MS: 1 ChangeBit- 00:12:10.011 [2024-11-26 18:08:47.263137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.263170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:10.011 [2024-11-26 18:08:47.263269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.263288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:10.011 [2024-11-26 18:08:47.263382] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.263402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:10.011 [2024-11-26 18:08:47.263494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.263516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:10.011 #65 NEW cov: 12606 ft: 15603 corp: 21/1736b lim: 120 exec/s: 65 rss: 75Mb L: 96/98 MS: 1 ChangeBit- 00:12:10.011 [2024-11-26 18:08:47.353353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.353387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:10.011 [2024-11-26 18:08:47.353503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.353519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:10.011 [2024-11-26 18:08:47.353607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.011 [2024-11-26 18:08:47.353625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:10.012 [2024-11-26 18:08:47.353722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.012 [2024-11-26 18:08:47.353745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:10.012 #66 NEW cov: 12606 ft: 15633 corp: 22/1832b lim: 120 exec/s: 66 rss: 75Mb L: 96/98 MS: 1 ChangeBinInt- 00:12:10.012 [2024-11-26 18:08:47.443642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836035054270 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.012 [2024-11-26 18:08:47.443674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:10.012 [2024-11-26 18:08:47.443763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632840224423614 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.012 [2024-11-26 18:08:47.443781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:10.012 [2024-11-26 18:08:47.443846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.012 [2024-11-26 18:08:47.443867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:10.012 
[2024-11-26 18:08:47.443932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.012 [2024-11-26 18:08:47.443953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:10.268 #67 NEW cov: 12606 ft: 15651 corp: 23/1930b lim: 120 exec/s: 67 rss: 75Mb L: 98/98 MS: 1 ChangeByte- 00:12:10.268 [2024-11-26 18:08:47.533639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13744632836214668990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.268 [2024-11-26 18:08:47.533670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:10.268 [2024-11-26 18:08:47.533756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.268 [2024-11-26 18:08:47.533776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:10.268 [2024-11-26 18:08:47.533870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.268 [2024-11-26 18:08:47.533888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:10.268 #68 NEW cov: 12606 ft: 15695 corp: 24/2024b lim: 120 exec/s: 68 rss: 75Mb L: 94/98 MS: 1 ChangeBit- 00:12:10.268 [2024-11-26 18:08:47.592958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:10.268 [2024-11-26 18:08:47.592989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:10.269 #69 NEW cov: 12606 ft: 15704 corp: 25/2049b lim: 120 exec/s: 34 rss: 75Mb L: 25/98 MS: 1 ShuffleBytes- 00:12:10.269 #69 DONE cov: 12606 ft: 15704 corp: 25/2049b lim: 120 exec/s: 34 rss: 75Mb 00:12:10.269 Done 69 runs in 2 second(s) 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:10.526 18:08:47 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:10.526 18:08:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:12:10.526 [2024-11-26 18:08:47.808643] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:10.526 [2024-11-26 18:08:47.808702] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294036 ] 00:12:10.784 [2024-11-26 18:08:48.005471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.784 [2024-11-26 18:08:48.045758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.784 [2024-11-26 18:08:48.108081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.784 [2024-11-26 18:08:48.124261] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:12:10.784 INFO: Running with entropic power schedule (0xFF, 100). 00:12:10.784 INFO: Seed: 2233727038 00:12:10.784 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:10.784 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:10.784 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:12:10.784 INFO: A corpus is not provided, starting from an empty corpus 00:12:10.784 #2 INITED exec/s: 0 rss: 66Mb 00:12:10.784 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:10.784 This may also happen if the target rejected all inputs we tried so far 00:12:10.784 [2024-11-26 18:08:48.173107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:10.784 [2024-11-26 18:08:48.173132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:10.784 [2024-11-26 18:08:48.173198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:10.784 [2024-11-26 18:08:48.173208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:10.784 [2024-11-26 18:08:48.173254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:10.784 [2024-11-26 18:08:48.173265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.042 NEW_FUNC[1/715]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:12:11.042 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:11.042 #12 NEW cov: 12304 ft: 12321 corp: 2/69b lim: 100 exec/s: 0 rss: 73Mb L: 68/68 MS: 5 ShuffleBytes-ChangeByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:12:11.042 [2024-11-26 18:08:48.323599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.042 [2024-11-26 18:08:48.323628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.323667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.042 [2024-11-26 18:08:48.323679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.323735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.042 [2024-11-26 18:08:48.323747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.042 NEW_FUNC[1/1]: 0x1c40288 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:595 00:12:11.042 #13 NEW cov: 12435 ft: 12775 corp: 3/137b lim: 100 exec/s: 0 rss: 73Mb L: 68/68 MS: 1 CopyPart- 00:12:11.042 [2024-11-26 18:08:48.383824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.042 [2024-11-26 18:08:48.383848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.383899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.042 [2024-11-26 18:08:48.383912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.383986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.042 [2024-11-26 18:08:48.383999] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.384052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.042 [2024-11-26 18:08:48.384065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.042 #14 NEW cov: 12441 ft: 13304 corp: 4/218b lim: 100 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:12:11.042 [2024-11-26 18:08:48.443927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.042 [2024-11-26 18:08:48.443951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.443994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.042 [2024-11-26 18:08:48.444007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.444066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.042 [2024-11-26 18:08:48.444079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.042 [2024-11-26 18:08:48.444135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.042 [2024-11-26 18:08:48.444148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.042 #15 NEW cov: 12526 ft: 13716 corp: 5/299b lim: 100 exec/s: 0 rss: 74Mb L: 81/81 MS: 1 ChangeBit- 00:12:11.300 [2024-11-26 18:08:48.503986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.300 [2024-11-26 18:08:48.504010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.504068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.300 [2024-11-26 18:08:48.504078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.504129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.300 [2024-11-26 18:08:48.504142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.300 #16 NEW cov: 12526 ft: 13873 corp: 6/367b lim: 100 exec/s: 0 rss: 74Mb L: 68/81 MS: 1 ChangeBinInt- 00:12:11.300 [2024-11-26 18:08:48.543938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.300 [2024-11-26 18:08:48.543978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.544032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.300 [2024-11-26 18:08:48.544041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.300 #19 NEW cov: 12526 ft: 14191 corp: 7/421b lim: 100 exec/s: 0 rss: 74Mb L: 54/81 MS: 3 ChangeByte-ChangeASCIIInt-InsertRepeatedBytes- 00:12:11.300 [2024-11-26 18:08:48.584322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.300 [2024-11-26 18:08:48.584345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.584394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.300 [2024-11-26 18:08:48.584422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.584460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.300 [2024-11-26 18:08:48.584472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.584528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.300 [2024-11-26 18:08:48.584540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.300 #20 NEW cov: 12526 ft: 14315 corp: 8/507b lim: 100 exec/s: 0 rss: 74Mb L: 86/86 MS: 1 CopyPart- 00:12:11.300 [2024-11-26 18:08:48.644551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.300 [2024-11-26 18:08:48.644575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.644644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.300 [2024-11-26 18:08:48.644660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.644711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.300 [2024-11-26 18:08:48.644722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.644774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.300 [2024-11-26 18:08:48.644786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.300 #21 NEW cov: 12526 ft: 14371 corp: 9/593b lim: 100 exec/s: 0 rss: 74Mb L: 86/86 MS: 1 ChangeByte- 00:12:11.300 [2024-11-26 18:08:48.704854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.300 [2024-11-26 18:08:48.704879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.704935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.300 [2024-11-26 18:08:48.704948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:12:11.300 [2024-11-26 18:08:48.704994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.300 [2024-11-26 18:08:48.705006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.705058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.300 [2024-11-26 18:08:48.705071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.705125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:12:11.300 [2024-11-26 18:08:48.705137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:11.300 #22 NEW cov: 12526 ft: 14429 corp: 10/693b lim: 100 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:12:11.300 [2024-11-26 18:08:48.744850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.300 [2024-11-26 18:08:48.744873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.744918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.300 [2024-11-26 18:08:48.744931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.744981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.300 [2024-11-26 18:08:48.744993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.300 [2024-11-26 18:08:48.745044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.300 [2024-11-26 18:08:48.745056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.559 #23 NEW cov: 12526 ft: 14475 corp: 11/786b lim: 100 exec/s: 0 rss: 74Mb L: 93/100 MS: 1 InsertRepeatedBytes- 00:12:11.559 [2024-11-26 18:08:48.784771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.559 [2024-11-26 18:08:48.784793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.784841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.559 [2024-11-26 18:08:48.784855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.784887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.559 [2024-11-26 18:08:48.784900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.559 #24 NEW cov: 12526 ft: 14508 corp: 12/855b lim: 100 exec/s: 0 rss: 74Mb L: 69/100 MS: 1 InsertByte- 00:12:11.559 [2024-11-26 18:08:48.824755] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.559 [2024-11-26 18:08:48.824777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.824828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.559 [2024-11-26 18:08:48.824841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.559 #25 NEW cov: 12526 ft: 14571 corp: 13/899b lim: 100 exec/s: 0 rss: 74Mb L: 44/100 MS: 1 CrossOver- 00:12:11.559 [2024-11-26 18:08:48.865041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.559 [2024-11-26 18:08:48.865064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.865110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.559 [2024-11-26 18:08:48.865123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.865174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.559 [2024-11-26 18:08:48.865185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.559 #26 NEW cov: 12526 ft: 14656 corp: 14/968b lim: 100 exec/s: 0 rss: 74Mb L: 69/100 MS: 1 InsertByte- 00:12:11.559 [2024-11-26 18:08:48.905152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.559 [2024-11-26 18:08:48.905174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.905235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.559 [2024-11-26 18:08:48.905248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.905287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.559 [2024-11-26 18:08:48.905300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.559 #27 NEW cov: 12526 ft: 14675 corp: 15/1036b lim: 100 exec/s: 0 rss: 74Mb L: 68/100 MS: 1 CMP- DE: "\000\205F[\200\036Up"- 00:12:11.559 [2024-11-26 18:08:48.945270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.559 [2024-11-26 18:08:48.945293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.945340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.559 [2024-11-26 18:08:48.945351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.559 [2024-11-26 18:08:48.945379] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.559 [2024-11-26 18:08:48.945391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.559 #28 NEW cov: 12526 ft: 14682 corp: 16/1113b lim: 100 exec/s: 0 rss: 74Mb L: 77/100 MS: 1 CMP- DE: "\377\204F[\205\201o\272"- 00:12:11.817 [2024-11-26 18:08:49.005722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.817 [2024-11-26 18:08:49.005746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.817 [2024-11-26 18:08:49.005796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.817 [2024-11-26 18:08:49.005808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.817 [2024-11-26 18:08:49.005834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.817 [2024-11-26 18:08:49.005847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.817 [2024-11-26 18:08:49.005898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.817 [2024-11-26 18:08:49.005915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.817 [2024-11-26 18:08:49.005972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:12:11.817 [2024-11-26 18:08:49.005984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:11.817 #29 NEW cov: 12526 ft: 14723 corp: 17/1213b lim: 100 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 CrossOver- 00:12:11.817 [2024-11-26 18:08:49.045660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.818 [2024-11-26 18:08:49.045683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.045730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.818 [2024-11-26 18:08:49.045742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.045765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.818 [2024-11-26 18:08:49.045777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.045830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.818 [2024-11-26 18:08:49.045843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.818 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:11.818 #30 NEW cov: 12549 ft: 14761 corp: 18/1306b lim: 100 exec/s: 0 rss: 74Mb L: 93/100 MS: 1 CopyPart- 
00:12:11.818 [2024-11-26 18:08:49.105826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.818 [2024-11-26 18:08:49.105850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.105921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.818 [2024-11-26 18:08:49.105935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.105988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.818 [2024-11-26 18:08:49.106000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.106055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.818 [2024-11-26 18:08:49.106067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.818 #31 NEW cov: 12549 ft: 14770 corp: 19/1405b lim: 100 exec/s: 0 rss: 74Mb L: 99/100 MS: 1 CopyPart- 00:12:11.818 [2024-11-26 18:08:49.165869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.818 [2024-11-26 18:08:49.165892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.165958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.818 [2024-11-26 18:08:49.165973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.166028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.818 [2024-11-26 18:08:49.166041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.818 #32 NEW cov: 12549 ft: 14837 corp: 20/1473b lim: 100 exec/s: 32 rss: 74Mb L: 68/100 MS: 1 CopyPart- 00:12:11.818 [2024-11-26 18:08:49.206117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:11.818 [2024-11-26 18:08:49.206140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.206207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:11.818 [2024-11-26 18:08:49.206224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.206274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:11.818 [2024-11-26 18:08:49.206287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:11.818 [2024-11-26 18:08:49.206338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:11.818 [2024-11-26 18:08:49.206351] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:11.818 #33 NEW cov: 12549 ft: 14855 corp: 21/1554b lim: 100 exec/s: 33 rss: 74Mb L: 81/100 MS: 1 CrossOver- 00:12:12.075 [2024-11-26 18:08:49.266194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.075 [2024-11-26 18:08:49.266217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.075 [2024-11-26 18:08:49.266265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.075 [2024-11-26 18:08:49.266277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.075 [2024-11-26 18:08:49.266312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.075 [2024-11-26 18:08:49.266325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.075 #34 NEW cov: 12549 ft: 14888 corp: 22/1622b lim: 100 exec/s: 34 rss: 74Mb L: 68/100 MS: 1 ChangeBinInt- 00:12:12.075 [2024-11-26 18:08:49.306403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.075 [2024-11-26 18:08:49.306426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.075 [2024-11-26 18:08:49.306473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.075 [2024-11-26 18:08:49.306486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.075 [2024-11-26 18:08:49.306539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.075 [2024-11-26 18:08:49.306550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.075 [2024-11-26 18:08:49.306602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.075 [2024-11-26 18:08:49.306614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.075 #38 NEW cov: 12549 ft: 14903 corp: 23/1706b lim: 100 exec/s: 38 rss: 74Mb L: 84/100 MS: 4 InsertRepeatedBytes-EraseBytes-InsertByte-InsertRepeatedBytes- 00:12:12.075 [2024-11-26 18:08:49.346249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.075 [2024-11-26 18:08:49.346272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.075 [2024-11-26 18:08:49.346323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.075 [2024-11-26 18:08:49.346336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.075 #39 NEW cov: 12549 ft: 14910 corp: 24/1760b lim: 100 exec/s: 39 rss: 74Mb L: 54/100 MS: 1 ChangeBit- 00:12:12.075 [2024-11-26 18:08:49.406567] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.076 [2024-11-26 18:08:49.406591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.406641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.076 [2024-11-26 18:08:49.406654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.406704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.076 [2024-11-26 18:08:49.406717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.076 #42 NEW cov: 12549 ft: 14938 corp: 25/1830b lim: 100 exec/s: 42 rss: 74Mb L: 70/100 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:12:12.076 [2024-11-26 18:08:49.446975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.076 [2024-11-26 18:08:49.446998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.447062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.076 [2024-11-26 18:08:49.447080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.447124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.076 [2024-11-26 18:08:49.447136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.447190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.076 [2024-11-26 18:08:49.447202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.447256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:12:12.076 [2024-11-26 18:08:49.447269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:12.076 #43 NEW cov: 12549 ft: 14961 corp: 26/1930b lim: 100 exec/s: 43 rss: 75Mb L: 100/100 MS: 1 ChangeByte- 00:12:12.076 [2024-11-26 18:08:49.507126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.076 [2024-11-26 18:08:49.507149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.507199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.076 [2024-11-26 18:08:49.507211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.507242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.076 [2024-11-26 18:08:49.507254] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.507306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.076 [2024-11-26 18:08:49.507320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.076 [2024-11-26 18:08:49.507375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:12:12.076 [2024-11-26 18:08:49.507389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:12.333 #44 NEW cov: 12549 ft: 14985 corp: 27/2030b lim: 100 exec/s: 44 rss: 75Mb L: 100/100 MS: 1 ChangeByte- 00:12:12.333 [2024-11-26 18:08:49.567120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.333 [2024-11-26 18:08:49.567143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.333 [2024-11-26 18:08:49.567189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.333 [2024-11-26 18:08:49.567201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.333 [2024-11-26 18:08:49.567259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.333 [2024-11-26 18:08:49.567272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.333 [2024-11-26 18:08:49.567323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.333 [2024-11-26 18:08:49.567335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.333 #45 NEW cov: 12549 ft: 14987 corp: 28/2117b lim: 100 exec/s: 45 rss: 75Mb L: 87/100 MS: 1 InsertRepeatedBytes- 00:12:12.333 [2024-11-26 18:08:49.627161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.333 [2024-11-26 18:08:49.627184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.333 [2024-11-26 18:08:49.627234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.333 [2024-11-26 18:08:49.627247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.333 [2024-11-26 18:08:49.627296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.333 [2024-11-26 18:08:49.627309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.334 #46 NEW cov: 12549 ft: 14993 corp: 29/2186b lim: 100 exec/s: 46 rss: 75Mb L: 69/100 MS: 1 InsertByte- 00:12:12.334 [2024-11-26 18:08:49.667560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.334 [2024-11-26 18:08:49.667582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.667632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.334 [2024-11-26 18:08:49.667645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.667673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.334 [2024-11-26 18:08:49.667685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.667738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.334 [2024-11-26 18:08:49.667750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.667802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:12:12.334 [2024-11-26 18:08:49.667814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:12.334 #47 NEW cov: 12549 ft: 15028 corp: 30/2286b lim: 100 exec/s: 47 rss: 75Mb L: 100/100 MS: 1 CopyPart- 00:12:12.334 [2024-11-26 18:08:49.727591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.334 [2024-11-26 18:08:49.727615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.727684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.334 [2024-11-26 18:08:49.727695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.727746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.334 [2024-11-26 18:08:49.727758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.727812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.334 [2024-11-26 18:08:49.727824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.334 #48 NEW cov: 12549 ft: 15080 corp: 31/2384b lim: 100 exec/s: 48 rss: 75Mb L: 98/100 MS: 1 InsertRepeatedBytes- 00:12:12.334 [2024-11-26 18:08:49.767861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.334 [2024-11-26 18:08:49.767886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.767933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.334 [2024-11-26 18:08:49.767946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.767968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:2 nsid:0 00:12:12.334 [2024-11-26 18:08:49.767981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.768033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.334 [2024-11-26 18:08:49.768045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.334 [2024-11-26 18:08:49.768099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:12:12.334 [2024-11-26 18:08:49.768112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:12.592 #49 NEW cov: 12549 ft: 15091 corp: 32/2484b lim: 100 exec/s: 49 rss: 75Mb L: 100/100 MS: 1 ShuffleBytes- 00:12:12.592 [2024-11-26 18:08:49.807609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.592 [2024-11-26 18:08:49.807632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.807686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.592 [2024-11-26 18:08:49.807695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.592 #50 NEW cov: 12549 ft: 15106 corp: 33/2535b lim: 100 exec/s: 50 rss: 75Mb L: 51/100 MS: 1 EraseBytes- 00:12:12.592 [2024-11-26 18:08:49.868012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.592 [2024-11-26 18:08:49.868035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.868083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.592 [2024-11-26 18:08:49.868095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.868116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.592 [2024-11-26 18:08:49.868128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.868183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.592 [2024-11-26 18:08:49.868196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.592 #51 NEW cov: 12549 ft: 15117 corp: 34/2617b lim: 100 exec/s: 51 rss: 75Mb L: 82/100 MS: 1 InsertByte- 00:12:12.592 [2024-11-26 18:08:49.927877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.592 [2024-11-26 18:08:49.927900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.927953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.592 [2024-11-26 18:08:49.927962] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.592 #52 NEW cov: 12549 ft: 15133 corp: 35/2668b lim: 100 exec/s: 52 rss: 75Mb L: 51/100 MS: 1 EraseBytes- 00:12:12.592 [2024-11-26 18:08:49.988211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.592 [2024-11-26 18:08:49.988235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.988305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.592 [2024-11-26 18:08:49.988315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.592 [2024-11-26 18:08:49.988370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.592 [2024-11-26 18:08:49.988393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.592 #53 NEW cov: 12549 ft: 15139 corp: 36/2732b lim: 100 exec/s: 53 rss: 75Mb L: 64/100 MS: 1 EraseBytes- 00:12:12.851 [2024-11-26 18:08:50.048602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.851 [2024-11-26 18:08:50.048629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.851 [2024-11-26 18:08:50.048680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.851 [2024-11-26 18:08:50.048691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.851 [2024-11-26 18:08:50.048744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:12.851 [2024-11-26 18:08:50.048756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:12.851 [2024-11-26 18:08:50.048809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:12.851 [2024-11-26 18:08:50.048822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:12.851 #54 NEW cov: 12549 ft: 15184 corp: 37/2831b lim: 100 exec/s: 54 rss: 75Mb L: 99/100 MS: 1 PersAutoDict- DE: "\000\205F[\200\036Up"- 00:12:12.851 [2024-11-26 18:08:50.140660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:12.851 [2024-11-26 18:08:50.140704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:12.851 [2024-11-26 18:08:50.140810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:12.851 [2024-11-26 18:08:50.140829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:12.851 #55 NEW cov: 12549 ft: 15337 corp: 38/2886b lim: 100 exec/s: 27 rss: 75Mb L: 55/100 MS: 1 EraseBytes- 00:12:12.851 #55 DONE cov: 12549 ft: 15337 corp: 38/2886b lim: 100 exec/s: 27 rss: 75Mb 00:12:12.851 
###### Recommended dictionary. ###### 00:12:12.851 "\000\205F[\200\036Up" # Uses: 1 00:12:12.851 "\377\204F[\205\201o\272" # Uses: 0 00:12:12.851 ###### End of recommended dictionary. ###### 00:12:12.851 Done 55 runs in 2 second(s) 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:13.109 18:08:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:12:13.109 [2024-11-26 18:08:50.355545] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:12:13.109 [2024-11-26 18:08:50.355604] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294549 ] 00:12:13.109 [2024-11-26 18:08:50.551316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.367 [2024-11-26 18:08:50.591720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.367 [2024-11-26 18:08:50.654041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.367 [2024-11-26 18:08:50.670218] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:12:13.367 INFO: Running with entropic power schedule (0xFF, 100). 00:12:13.367 INFO: Seed: 486761653 00:12:13.367 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:13.367 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:13.367 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:12:13.367 INFO: A corpus is not provided, starting from an empty corpus 00:12:13.367 #2 INITED exec/s: 0 rss: 65Mb 00:12:13.367 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:13.367 This may also happen if the target rejected all inputs we tried so far 00:12:13.367 [2024-11-26 18:08:50.715702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:13.367 [2024-11-26 18:08:50.715732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.367 [2024-11-26 18:08:50.715783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.367 [2024-11-26 18:08:50.715795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.626 NEW_FUNC[1/716]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:12:13.626 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:13.626 #4 NEW cov: 12298 ft: 12297 corp: 2/23b lim: 50 exec/s: 0 rss: 72Mb L: 22/22 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:12:13.626 [2024-11-26 18:08:50.916312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551400 len:65536 00:12:13.626 [2024-11-26 18:08:50.916344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.626 [2024-11-26 18:08:50.916397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.626 [2024-11-26 18:08:50.916411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.626 #5 NEW cov: 12413 ft: 12712 corp: 3/45b lim: 50 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ChangeByte- 00:12:13.626 [2024-11-26 18:08:50.976388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:13.626 [2024-11-26 18:08:50.976412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.626 [2024-11-26 18:08:50.976483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.626 [2024-11-26 18:08:50.976496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.626 #6 NEW cov: 12419 ft: 13156 corp: 4/71b lim: 50 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 CopyPart- 00:12:13.626 [2024-11-26 18:08:51.016480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:13.626 [2024-11-26 18:08:51.016504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.626 [2024-11-26 18:08:51.016571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.626 [2024-11-26 18:08:51.016585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.626 #7 NEW cov: 12504 ft: 13378 corp: 5/97b lim: 50 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeBinInt- 00:12:13.884 [2024-11-26 18:08:51.076644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:13.884 [2024-11-26 18:08:51.076669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.884 [2024-11-26 18:08:51.076722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.884 [2024-11-26 18:08:51.076743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.884 #8 NEW cov: 12504 ft: 13519 corp: 6/123b lim: 50 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ShuffleBytes- 00:12:13.884 [2024-11-26 18:08:51.136816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:13.884 [2024-11-26 18:08:51.136844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.884 [2024-11-26 18:08:51.136915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.884 [2024-11-26 18:08:51.136926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.884 #9 NEW cov: 12504 ft: 13643 corp: 7/149b lim: 50 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ShuffleBytes- 00:12:13.884 [2024-11-26 18:08:51.196977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744039349813032 len:65536 00:12:13.884 [2024-11-26 18:08:51.197003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.884 [2024-11-26 18:08:51.197058] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:13.884 [2024-11-26 18:08:51.197080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.884 #10 NEW cov: 12504 ft: 13775 corp: 8/171b lim: 50 exec/s: 0 rss: 73Mb L: 22/26 MS: 1 ChangeBit- 00:12:13.884 [2024-11-26 18:08:51.257270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551400 len:65536 00:12:13.884 [2024-11-26 18:08:51.257295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:13.884 [2024-11-26 18:08:51.257344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:57569 00:12:13.884 [2024-11-26 18:08:51.257355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:13.884 [2024-11-26 18:08:51.257407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:12:13.884 [2024-11-26 18:08:51.257435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:13.884 #11 NEW cov: 12504 ft: 14125 corp: 9/208b lim: 50 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:12:13.884 [2024-11-26 18:08:51.297141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744039349813032 len:65536 00:12:13.884 [2024-11-26 18:08:51.297166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.142 #12 NEW cov: 12504 ft: 14459 corp: 10/226b lim: 50 exec/s: 0 rss: 73Mb L: 18/37 MS: 1 EraseBytes- 00:12:14.142 [2024-11-26 18:08:51.357461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:14.142 [2024-11-26 18:08:51.357486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.357536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:2049 00:12:14.142 [2024-11-26 18:08:51.357546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.142 #13 NEW cov: 12504 ft: 14563 corp: 11/252b lim: 50 exec/s: 0 rss: 73Mb L: 26/37 MS: 1 ChangeBinInt- 00:12:14.142 [2024-11-26 18:08:51.417633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.142 [2024-11-26 18:08:51.417658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.417725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414584342 len:65536 00:12:14.142 [2024-11-26 18:08:51.417740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:12:14.142 #14 NEW cov: 12504 ft: 14578 corp: 12/274b lim: 50 exec/s: 0 rss: 74Mb L: 22/37 MS: 1 ChangeBinInt- 00:12:14.142 [2024-11-26 18:08:51.457722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:14.142 [2024-11-26 18:08:51.457746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.457811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.142 [2024-11-26 18:08:51.457824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.142 #15 NEW cov: 12504 ft: 14605 corp: 13/300b lim: 50 exec/s: 0 rss: 74Mb L: 26/37 MS: 1 ChangeBit- 00:12:14.142 [2024-11-26 18:08:51.497835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.142 [2024-11-26 18:08:51.497859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.497929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251799813619712 len:65536 00:12:14.142 [2024-11-26 18:08:51.497941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.142 #16 NEW cov: 12504 ft: 14635 corp: 14/326b lim: 50 exec/s: 0 rss: 74Mb L: 26/37 MS: 1 ChangeBinInt- 00:12:14.142 [2024-11-26 18:08:51.538203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.142 [2024-11-26 18:08:51.538228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.538279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251799813619712 len:65536 00:12:14.142 [2024-11-26 18:08:51.538292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.538340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18437984131427074047 len:57569 00:12:14.142 [2024-11-26 18:08:51.538353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.142 [2024-11-26 18:08:51.538424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:12:14.142 [2024-11-26 18:08:51.538437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:14.142 #17 NEW cov: 12504 ft: 14907 corp: 15/373b lim: 50 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 CrossOver- 00:12:14.404 [2024-11-26 18:08:51.598148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.404 [2024-11-26 18:08:51.598174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.598226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.404 [2024-11-26 18:08:51.598236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.404 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:14.404 #18 NEW cov: 12527 ft: 14963 corp: 16/395b lim: 50 exec/s: 0 rss: 74Mb L: 22/47 MS: 1 CopyPart- 00:12:14.404 [2024-11-26 18:08:51.638492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709550847 len:65536 00:12:14.404 [2024-11-26 18:08:51.638517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.638581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251799813619712 len:65536 00:12:14.404 [2024-11-26 18:08:51.638598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.638646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18437984131427074047 len:57569 00:12:14.404 [2024-11-26 18:08:51.638658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.638709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:12:14.404 [2024-11-26 18:08:51.638722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:14.404 #19 NEW cov: 12527 ft: 15019 corp: 17/442b lim: 50 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 ChangeBinInt- 00:12:14.404 [2024-11-26 18:08:51.698308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6655295899727322204 len:23645 00:12:14.404 [2024-11-26 18:08:51.698334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.404 #21 NEW cov: 12527 ft: 15039 corp: 18/456b lim: 50 exec/s: 21 rss: 74Mb L: 14/47 MS: 2 CopyPart-InsertRepeatedBytes- 00:12:14.404 [2024-11-26 18:08:51.738813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709550847 len:65536 00:12:14.404 [2024-11-26 18:08:51.738838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.738903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251799813619712 len:65536 00:12:14.404 [2024-11-26 18:08:51.738919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.738971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18437984131427074047 len:57569 00:12:14.404 [2024-11-26 18:08:51.738983] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.739035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204197766541402336 len:57569 00:12:14.404 [2024-11-26 18:08:51.739047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:14.404 #22 NEW cov: 12527 ft: 15041 corp: 19/503b lim: 50 exec/s: 22 rss: 74Mb L: 47/47 MS: 1 ChangeByte- 00:12:14.404 [2024-11-26 18:08:51.798741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:14.404 [2024-11-26 18:08:51.798766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.798814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.404 [2024-11-26 18:08:51.798824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.404 #23 NEW cov: 12527 ft: 15056 corp: 20/529b lim: 50 exec/s: 23 rss: 74Mb L: 26/47 MS: 1 CopyPart- 00:12:14.404 [2024-11-26 18:08:51.838854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.404 [2024-11-26 18:08:51.838880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.404 [2024-11-26 18:08:51.838951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.404 [2024-11-26 18:08:51.838963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.687 #24 NEW cov: 12527 ft: 15134 corp: 21/555b lim: 50 exec/s: 24 rss: 74Mb L: 26/47 MS: 1 CrossOver- 00:12:14.687 [2024-11-26 18:08:51.899296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:14.687 [2024-11-26 18:08:51.899321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:51.899365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:768 00:12:14.687 [2024-11-26 18:08:51.899383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:51.899406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:12:14.687 [2024-11-26 18:08:51.899419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:51.899469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:12:14.687 [2024-11-26 18:08:51.899481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:14.687 #25 NEW cov: 12527 ft: 15187 corp: 22/599b lim: 50 exec/s: 25 rss: 74Mb L: 44/47 MS: 1 CrossOver- 00:12:14.687 [2024-11-26 18:08:51.939221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744039349813032 len:65536 00:12:14.687 [2024-11-26 18:08:51.939244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:51.939310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2336242304522232851 len:65536 00:12:14.687 [2024-11-26 18:08:51.939325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:51.939384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65291 00:12:14.687 [2024-11-26 18:08:51.939397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.687 #26 NEW cov: 12527 ft: 15188 corp: 23/629b lim: 50 exec/s: 26 rss: 74Mb L: 30/47 MS: 1 CMP- DE: "\377\377~I\244\023 k"- 00:12:14.687 [2024-11-26 18:08:51.979204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.687 [2024-11-26 18:08:51.979228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:51.979298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057589742960704 len:65536 00:12:14.687 [2024-11-26 18:08:51.979311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.687 #27 NEW cov: 12527 ft: 15192 corp: 24/651b lim: 50 exec/s: 27 rss: 74Mb L: 22/47 MS: 1 CMP- DE: "\000\000@\000"- 00:12:14.687 [2024-11-26 18:08:52.039396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744039349813032 len:65536 00:12:14.687 [2024-11-26 18:08:52.039423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:52.039474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2449958197289549823 len:65536 00:12:14.687 [2024-11-26 18:08:52.039484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.687 #28 NEW cov: 12527 ft: 15194 corp: 25/673b lim: 50 exec/s: 28 rss: 74Mb L: 22/47 MS: 1 ChangeByte- 00:12:14.687 [2024-11-26 18:08:52.079404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.687 [2024-11-26 18:08:52.079428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.687 #29 NEW cov: 12527 ft: 15220 corp: 26/684b lim: 50 exec/s: 29 rss: 74Mb L: 11/47 MS: 1 EraseBytes- 00:12:14.687 [2024-11-26 18:08:52.119764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551400 len:65536 00:12:14.687 [2024-11-26 18:08:52.119788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:52.119835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:32330 00:12:14.687 [2024-11-26 18:08:52.119849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.687 [2024-11-26 18:08:52.119907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198714709057643 len:57569 00:12:14.687 [2024-11-26 18:08:52.119920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.971 #30 NEW cov: 12527 ft: 15258 corp: 27/721b lim: 50 exec/s: 30 rss: 74Mb L: 37/47 MS: 1 PersAutoDict- DE: "\377\377~I\244\023 k"- 00:12:14.971 [2024-11-26 18:08:52.179850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743236190928898 len:65536 00:12:14.971 [2024-11-26 18:08:52.179875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.179942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.971 [2024-11-26 18:08:52.179953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.971 #31 NEW cov: 12527 ft: 15270 corp: 28/747b lim: 50 exec/s: 31 rss: 74Mb L: 26/47 MS: 1 ChangeByte- 00:12:14.971 [2024-11-26 18:08:52.220207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:14.971 [2024-11-26 18:08:52.220232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.220278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:768 00:12:14.971 [2024-11-26 18:08:52.220292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.220315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:12:14.971 [2024-11-26 18:08:52.220327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.220377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:12:14.971 [2024-11-26 18:08:52.220390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:14.971 #32 NEW cov: 12527 ft: 15278 corp: 29/791b lim: 50 exec/s: 32 rss: 74Mb L: 44/47 MS: 1 ShuffleBytes- 00:12:14.971 [2024-11-26 18:08:52.280103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:14.971 [2024-11-26 18:08:52.280127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.280192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.971 [2024-11-26 18:08:52.280204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.971 #33 NEW cov: 12527 ft: 15290 corp: 30/817b lim: 50 exec/s: 33 rss: 74Mb L: 26/47 MS: 1 ChangeBinInt- 00:12:14.971 [2024-11-26 18:08:52.320506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709550847 len:65536 00:12:14.971 [2024-11-26 18:08:52.320530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.320596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251799813619712 len:65536 00:12:14.971 [2024-11-26 18:08:52.320612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.320662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18437984131427074047 len:57569 00:12:14.971 [2024-11-26 18:08:52.320675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.320725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204086565543141600 len:57569 00:12:14.971 [2024-11-26 18:08:52.320738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:14.971 #34 NEW cov: 12527 ft: 15340 corp: 31/865b lim: 50 exec/s: 34 rss: 74Mb L: 48/48 MS: 1 InsertByte- 00:12:14.971 [2024-11-26 18:08:52.360343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743236190928898 len:65536 00:12:14.971 [2024-11-26 18:08:52.360366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:14.971 [2024-11-26 18:08:52.360448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:14.971 [2024-11-26 18:08:52.360461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:14.971 #35 NEW cov: 12527 ft: 15342 corp: 32/891b lim: 50 exec/s: 35 rss: 75Mb L: 26/48 MS: 1 ShuffleBytes- 00:12:15.259 [2024-11-26 18:08:52.420818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446683364346822440 len:51401 00:12:15.259 [2024-11-26 18:08:52.420844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.259 [2024-11-26 18:08:52.420887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 
00:12:15.259 [2024-11-26 18:08:52.420901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:15.259 [2024-11-26 18:08:52.420923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:12:15.259 [2024-11-26 18:08:52.420936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:15.259 [2024-11-26 18:08:52.420989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18444492272969500872 len:65536 00:12:15.259 [2024-11-26 18:08:52.421003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:15.259 #36 NEW cov: 12527 ft: 15349 corp: 33/937b lim: 50 exec/s: 36 rss: 75Mb L: 46/48 MS: 1 InsertRepeatedBytes- 00:12:15.259 [2024-11-26 18:08:52.480871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:12:15.259 [2024-11-26 18:08:52.480895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.259 [2024-11-26 18:08:52.480959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251666669633536 len:57569 00:12:15.259 [2024-11-26 18:08:52.480976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:15.259 [2024-11-26 18:08:52.481027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:12:15.259 [2024-11-26 18:08:52.481040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:15.259 #37 NEW cov: 12527 ft: 15382 corp: 34/974b lim: 50 exec/s: 37 rss: 75Mb L: 37/48 MS: 1 EraseBytes- 00:12:15.259 [2024-11-26 18:08:52.520715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069431361535 len:65536 00:12:15.259 [2024-11-26 18:08:52.520740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.259 #38 NEW cov: 12527 ft: 15390 corp: 35/992b lim: 50 exec/s: 38 rss: 75Mb L: 18/48 MS: 1 EraseBytes- 00:12:15.259 [2024-11-26 18:08:52.560941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069416288258 len:65536 00:12:15.259 [2024-11-26 18:08:52.560965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.259 [2024-11-26 18:08:52.561034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:12:15.259 [2024-11-26 18:08:52.561046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:15.259 #39 NEW cov: 12527 ft: 15395 corp: 36/1018b lim: 50 exec/s: 39 rss: 75Mb L: 26/48 MS: 1 ChangeBinInt- 00:12:15.259 [2024-11-26 18:08:52.601313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709550847 len:65536 00:12:15.259 [2024-11-26 18:08:52.601337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.260 [2024-11-26 18:08:52.601383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2251799813619712 len:65536 00:12:15.260 [2024-11-26 18:08:52.601397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:15.260 [2024-11-26 18:08:52.601435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18437984131427074047 len:57569 00:12:15.260 [2024-11-26 18:08:52.601448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:15.260 [2024-11-26 18:08:52.601514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198277642510560 len:57569 00:12:15.260 [2024-11-26 18:08:52.601527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:15.260 #40 NEW cov: 12527 ft: 15403 corp: 37/1066b lim: 50 exec/s: 40 rss: 75Mb L: 48/48 MS: 1 ShuffleBytes- 00:12:15.260 [2024-11-26 18:08:52.661471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584322 len:65536 00:12:15.260 [2024-11-26 18:08:52.661496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.260 [2024-11-26 18:08:52.661563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:768 00:12:15.260 [2024-11-26 18:08:52.661579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:15.260 [2024-11-26 18:08:52.661626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:12:15.260 [2024-11-26 18:08:52.661639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:15.260 [2024-11-26 18:08:52.661691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:12:15.260 [2024-11-26 18:08:52.661705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:15.531 #41 NEW cov: 12527 ft: 15434 corp: 38/1110b lim: 50 exec/s: 20 rss: 75Mb L: 44/48 MS: 1 ShuffleBytes- 00:12:15.531 #41 DONE cov: 12527 ft: 15434 corp: 38/1110b lim: 50 exec/s: 20 rss: 75Mb 00:12:15.531 ###### Recommended dictionary. ###### 00:12:15.531 "\377\377~I\244\023 k" # Uses: 1 00:12:15.531 "\000\000@\000" # Uses: 0 00:12:15.531 ###### End of recommended dictionary. 
###### 00:12:15.531 Done 41 runs in 2 second(s) 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:15.531 18:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:12:15.531 [2024-11-26 18:08:52.844430] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:12:15.531 [2024-11-26 18:08:52.844481] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294952 ] 00:12:15.791 [2024-11-26 18:08:53.052379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.791 [2024-11-26 18:08:53.092670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.791 [2024-11-26 18:08:53.155040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.791 [2024-11-26 18:08:53.171228] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:15.791 INFO: Running with entropic power schedule (0xFF, 100). 00:12:15.791 INFO: Seed: 2987763060 00:12:15.791 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:15.791 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:15.791 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:12:15.791 INFO: A corpus is not provided, starting from an empty corpus 00:12:15.791 #2 INITED exec/s: 0 rss: 65Mb 00:12:15.791 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:15.791 This may also happen if the target rejected all inputs we tried so far 00:12:15.791 [2024-11-26 18:08:53.216854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:15.791 [2024-11-26 18:08:53.216882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:15.791 [2024-11-26 18:08:53.216932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:15.791 [2024-11-26 18:08:53.216944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.050 NEW_FUNC[1/718]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:12:16.050 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:16.051 #38 NEW cov: 12357 ft: 12353 corp: 2/43b lim: 90 exec/s: 0 rss: 72Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:12:16.051 [2024-11-26 18:08:53.407480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.051 [2024-11-26 18:08:53.407511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.051 [2024-11-26 18:08:53.407581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.051 [2024-11-26 18:08:53.407593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.051 [2024-11-26 18:08:53.407647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.051 [2024-11-26 18:08:53.407659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.051 #51 NEW cov: 12471 
ft: 13300 corp: 3/97b lim: 90 exec/s: 0 rss: 73Mb L: 54/54 MS: 3 CrossOver-ChangeBit-InsertRepeatedBytes- 00:12:16.051 [2024-11-26 18:08:53.447333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.051 [2024-11-26 18:08:53.447355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.051 [2024-11-26 18:08:53.447408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.051 [2024-11-26 18:08:53.447419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.051 #57 NEW cov: 12477 ft: 13451 corp: 4/139b lim: 90 exec/s: 0 rss: 73Mb L: 42/54 MS: 1 CopyPart- 00:12:16.310 [2024-11-26 18:08:53.507531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.310 [2024-11-26 18:08:53.507558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.507628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.310 [2024-11-26 18:08:53.507641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.310 #58 NEW cov: 12562 ft: 13679 corp: 5/180b lim: 90 exec/s: 0 rss: 73Mb L: 41/54 MS: 1 EraseBytes- 00:12:16.310 [2024-11-26 18:08:53.567985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.310 [2024-11-26 18:08:53.568008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.568078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.310 [2024-11-26 18:08:53.568091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.568142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.310 [2024-11-26 18:08:53.568154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.568205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:16.310 [2024-11-26 18:08:53.568218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:16.310 #59 NEW cov: 12562 ft: 14083 corp: 6/258b lim: 90 exec/s: 0 rss: 73Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:12:16.310 [2024-11-26 18:08:53.627803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.310 [2024-11-26 18:08:53.627826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.627894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.310 [2024-11-26 18:08:53.627911] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.310 #65 NEW cov: 12562 ft: 14320 corp: 7/308b lim: 90 exec/s: 0 rss: 73Mb L: 50/78 MS: 1 CMP- DE: "\000\205F^u\014PX"- 00:12:16.310 [2024-11-26 18:08:53.667776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.310 [2024-11-26 18:08:53.667800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.310 #66 NEW cov: 12562 ft: 15188 corp: 8/329b lim: 90 exec/s: 0 rss: 73Mb L: 21/78 MS: 1 EraseBytes- 00:12:16.310 [2024-11-26 18:08:53.708081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.310 [2024-11-26 18:08:53.708105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.708172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.310 [2024-11-26 18:08:53.708187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.310 #67 NEW cov: 12562 ft: 15282 corp: 9/378b lim: 90 exec/s: 0 rss: 73Mb L: 49/78 MS: 1 PersAutoDict- DE: "\000\205F^u\014PX"- 00:12:16.310 [2024-11-26 18:08:53.748487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.310 [2024-11-26 18:08:53.748510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.748557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.310 [2024-11-26 18:08:53.748574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.748619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.310 [2024-11-26 18:08:53.748631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.310 [2024-11-26 18:08:53.748683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:16.310 [2024-11-26 18:08:53.748695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:16.569 #68 NEW cov: 12562 ft: 15357 corp: 10/457b lim: 90 exec/s: 0 rss: 73Mb L: 79/79 MS: 1 CopyPart- 00:12:16.569 [2024-11-26 18:08:53.788298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.569 [2024-11-26 18:08:53.788320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.569 [2024-11-26 18:08:53.788377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.569 [2024-11-26 18:08:53.788396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.570 #69 NEW cov: 12562 ft: 15401 corp: 11/501b lim: 
90 exec/s: 0 rss: 73Mb L: 44/79 MS: 1 CrossOver- 00:12:16.570 [2024-11-26 18:08:53.828737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.570 [2024-11-26 18:08:53.828761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.828807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.570 [2024-11-26 18:08:53.828819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.828877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.570 [2024-11-26 18:08:53.828890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.828944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:16.570 [2024-11-26 18:08:53.828957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:16.570 #70 NEW cov: 12562 ft: 15416 corp: 12/581b lim: 90 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 CrossOver- 00:12:16.570 [2024-11-26 18:08:53.888753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.570 [2024-11-26 18:08:53.888777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.888823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.570 [2024-11-26 18:08:53.888837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.888899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.570 [2024-11-26 18:08:53.888912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.570 #71 NEW cov: 12562 ft: 15450 corp: 13/635b lim: 90 exec/s: 0 rss: 73Mb L: 54/80 MS: 1 ChangeByte- 00:12:16.570 [2024-11-26 18:08:53.949078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.570 [2024-11-26 18:08:53.949101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.949157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.570 [2024-11-26 18:08:53.949173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.949225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.570 [2024-11-26 18:08:53.949238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:53.949292] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:16.570 [2024-11-26 18:08:53.949304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:16.570 #72 NEW cov: 12562 ft: 15471 corp: 14/714b lim: 90 exec/s: 0 rss: 73Mb L: 79/80 MS: 1 InsertByte- 00:12:16.570 [2024-11-26 18:08:54.008918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.570 [2024-11-26 18:08:54.008941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.570 [2024-11-26 18:08:54.009011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.570 [2024-11-26 18:08:54.009023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.829 #73 NEW cov: 12562 ft: 15500 corp: 15/763b lim: 90 exec/s: 0 rss: 73Mb L: 49/80 MS: 1 ChangeBinInt- 00:12:16.829 [2024-11-26 18:08:54.068888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.829 [2024-11-26 18:08:54.068914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.829 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:16.829 #74 NEW cov: 12585 ft: 15545 corp: 16/797b lim: 90 exec/s: 0 rss: 74Mb L: 34/80 MS: 1 EraseBytes- 00:12:16.829 [2024-11-26 18:08:54.129254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.829 [2024-11-26 18:08:54.129277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.829 [2024-11-26 18:08:54.129346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.829 [2024-11-26 18:08:54.129358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.829 #75 NEW cov: 12585 ft: 15567 corp: 17/837b lim: 90 exec/s: 0 rss: 74Mb L: 40/80 MS: 1 EraseBytes- 00:12:16.829 [2024-11-26 18:08:54.189415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.829 [2024-11-26 18:08:54.189439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.829 [2024-11-26 18:08:54.189505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.829 [2024-11-26 18:08:54.189527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.830 #78 NEW cov: 12585 ft: 15602 corp: 18/875b lim: 90 exec/s: 78 rss: 74Mb L: 38/80 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:12:16.830 [2024-11-26 18:08:54.229897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.830 [2024-11-26 18:08:54.229920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:12:16.830 [2024-11-26 18:08:54.229964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.830 [2024-11-26 18:08:54.229976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:16.830 [2024-11-26 18:08:54.230055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:16.830 [2024-11-26 18:08:54.230068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:16.830 [2024-11-26 18:08:54.230123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:16.830 [2024-11-26 18:08:54.230136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:16.830 #79 NEW cov: 12585 ft: 15611 corp: 19/954b lim: 90 exec/s: 79 rss: 74Mb L: 79/80 MS: 1 CrossOver- 00:12:16.830 [2024-11-26 18:08:54.269670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:16.830 [2024-11-26 18:08:54.269693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:16.830 [2024-11-26 18:08:54.269742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:16.830 [2024-11-26 18:08:54.269753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.089 #80 NEW cov: 12585 ft: 15649 corp: 20/996b lim: 90 exec/s: 80 rss: 74Mb L: 42/80 MS: 1 ShuffleBytes- 00:12:17.089 [2024-11-26 18:08:54.309947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.089 [2024-11-26 18:08:54.309972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.310022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.089 [2024-11-26 18:08:54.310032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.310086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.089 [2024-11-26 18:08:54.310098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.089 #81 NEW cov: 12585 ft: 15665 corp: 21/1063b lim: 90 exec/s: 81 rss: 74Mb L: 67/80 MS: 1 EraseBytes- 00:12:17.089 [2024-11-26 18:08:54.349893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.089 [2024-11-26 18:08:54.349916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.349984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.089 [2024-11-26 18:08:54.349994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:12:17.089 #82 NEW cov: 12585 ft: 15712 corp: 22/1105b lim: 90 exec/s: 82 rss: 74Mb L: 42/80 MS: 1 ChangeBinInt- 00:12:17.089 [2024-11-26 18:08:54.390009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.089 [2024-11-26 18:08:54.390033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.390082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.089 [2024-11-26 18:08:54.390093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.089 #83 NEW cov: 12585 ft: 15722 corp: 23/1146b lim: 90 exec/s: 83 rss: 74Mb L: 41/80 MS: 1 ChangeBinInt- 00:12:17.089 [2024-11-26 18:08:54.430465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.089 [2024-11-26 18:08:54.430489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.430540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.089 [2024-11-26 18:08:54.430552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.430595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.089 [2024-11-26 18:08:54.430607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.430660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:17.089 [2024-11-26 18:08:54.430673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:17.089 #84 NEW cov: 12585 ft: 15732 corp: 24/1227b lim: 90 exec/s: 84 rss: 74Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:12:17.089 [2024-11-26 18:08:54.490300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.089 [2024-11-26 18:08:54.490323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.490384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.089 [2024-11-26 18:08:54.490398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.089 #85 NEW cov: 12585 ft: 15762 corp: 25/1279b lim: 90 exec/s: 85 rss: 74Mb L: 52/81 MS: 1 EraseBytes- 00:12:17.089 [2024-11-26 18:08:54.530611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.089 [2024-11-26 18:08:54.530634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.530683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.089 [2024-11-26 18:08:54.530696] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.089 [2024-11-26 18:08:54.530737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.089 [2024-11-26 18:08:54.530749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.349 #86 NEW cov: 12585 ft: 15774 corp: 26/1349b lim: 90 exec/s: 86 rss: 74Mb L: 70/81 MS: 1 EraseBytes- 00:12:17.349 [2024-11-26 18:08:54.570559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.349 [2024-11-26 18:08:54.570582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.349 [2024-11-26 18:08:54.570635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.349 [2024-11-26 18:08:54.570645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.349 #87 NEW cov: 12585 ft: 15796 corp: 27/1399b lim: 90 exec/s: 87 rss: 74Mb L: 50/81 MS: 1 PersAutoDict- DE: "\000\205F^u\014PX"- 00:12:17.349 [2024-11-26 18:08:54.630887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.349 [2024-11-26 18:08:54.630910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.349 [2024-11-26 18:08:54.630974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.349 [2024-11-26 18:08:54.630990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.349 [2024-11-26 18:08:54.631043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.349 [2024-11-26 18:08:54.631056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.349 #88 NEW cov: 12585 ft: 15806 corp: 28/1467b lim: 90 exec/s: 88 rss: 74Mb L: 68/81 MS: 1 CopyPart- 00:12:17.349 [2024-11-26 18:08:54.690679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.349 [2024-11-26 18:08:54.690702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.349 #89 NEW cov: 12585 ft: 15822 corp: 29/1501b lim: 90 exec/s: 89 rss: 74Mb L: 34/81 MS: 1 CopyPart- 00:12:17.349 [2024-11-26 18:08:54.750896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.349 [2024-11-26 18:08:54.750920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.349 #90 NEW cov: 12585 ft: 15827 corp: 30/1532b lim: 90 exec/s: 90 rss: 74Mb L: 31/81 MS: 1 EraseBytes- 00:12:17.608 [2024-11-26 18:08:54.811597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.608 [2024-11-26 18:08:54.811620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.811666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.608 [2024-11-26 18:08:54.811677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.811728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.608 [2024-11-26 18:08:54.811741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.811792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:17.608 [2024-11-26 18:08:54.811804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:17.608 #91 NEW cov: 12585 ft: 15835 corp: 31/1612b lim: 90 exec/s: 91 rss: 74Mb L: 80/81 MS: 1 InsertByte- 00:12:17.608 [2024-11-26 18:08:54.851665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.608 [2024-11-26 18:08:54.851689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.851755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.608 [2024-11-26 18:08:54.851770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.851822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.608 [2024-11-26 18:08:54.851835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.851887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:17.608 [2024-11-26 18:08:54.851899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:17.608 #92 NEW cov: 12585 ft: 15871 corp: 32/1691b lim: 90 exec/s: 92 rss: 74Mb L: 79/81 MS: 1 CrossOver- 00:12:17.608 [2024-11-26 18:08:54.891666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.608 [2024-11-26 18:08:54.891688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.891737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.608 [2024-11-26 18:08:54.891761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.891814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.608 [2024-11-26 18:08:54.891826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.608 #93 NEW cov: 12585 ft: 15912 corp: 33/1746b lim: 90 exec/s: 93 
rss: 74Mb L: 55/81 MS: 1 InsertRepeatedBytes- 00:12:17.608 [2024-11-26 18:08:54.931588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.608 [2024-11-26 18:08:54.931611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.931677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.608 [2024-11-26 18:08:54.931693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.608 #94 NEW cov: 12585 ft: 15921 corp: 34/1786b lim: 90 exec/s: 94 rss: 74Mb L: 40/81 MS: 1 ShuffleBytes- 00:12:17.608 [2024-11-26 18:08:54.991904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.608 [2024-11-26 18:08:54.991927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.991991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.608 [2024-11-26 18:08:54.992006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:54.992059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.608 [2024-11-26 18:08:54.992071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.608 #95 NEW cov: 12585 ft: 15923 corp: 35/1841b lim: 90 exec/s: 95 rss: 75Mb L: 55/81 MS: 1 EraseBytes- 00:12:17.608 [2024-11-26 18:08:55.052236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.608 [2024-11-26 18:08:55.052260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:55.052307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.608 [2024-11-26 18:08:55.052327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:55.052380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:17.608 [2024-11-26 18:08:55.052393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:17.608 [2024-11-26 18:08:55.052444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:17.609 [2024-11-26 18:08:55.052456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:17.867 #96 NEW cov: 12585 ft: 15927 corp: 36/1915b lim: 90 exec/s: 96 rss: 75Mb L: 74/81 MS: 1 CopyPart- 00:12:17.867 [2024-11-26 18:08:55.112043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.867 [2024-11-26 18:08:55.112065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.868 [2024-11-26 18:08:55.112133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.868 [2024-11-26 18:08:55.112143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.868 #97 NEW cov: 12585 ft: 15931 corp: 37/1956b lim: 90 exec/s: 97 rss: 75Mb L: 41/81 MS: 1 InsertByte- 00:12:17.868 [2024-11-26 18:08:55.152161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.868 [2024-11-26 18:08:55.152183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.868 [2024-11-26 18:08:55.152251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.868 [2024-11-26 18:08:55.152262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.868 #98 NEW cov: 12585 ft: 15932 corp: 38/2000b lim: 90 exec/s: 98 rss: 75Mb L: 44/81 MS: 1 CrossOver- 00:12:17.868 [2024-11-26 18:08:55.192278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:17.868 [2024-11-26 18:08:55.192300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:17.868 [2024-11-26 18:08:55.192370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:17.868 [2024-11-26 18:08:55.192386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:17.868 #99 NEW cov: 12585 ft: 15975 corp: 39/2049b lim: 90 exec/s: 49 rss: 75Mb L: 49/81 MS: 1 PersAutoDict- DE: "\000\205F^u\014PX"- 00:12:17.868 #99 DONE cov: 12585 ft: 15975 corp: 39/2049b lim: 90 exec/s: 49 rss: 75Mb 00:12:17.868 ###### Recommended dictionary. ###### 00:12:17.868 "\000\205F^u\014PX" # Uses: 3 00:12:17.868 ###### End of recommended dictionary. 
###### 00:12:17.868 Done 99 runs in 2 second(s) 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:12:18.127 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:18.128 18:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:12:18.128 [2024-11-26 18:08:55.372992] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:12:18.128 [2024-11-26 18:08:55.373039] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295347 ] 00:12:18.387 [2024-11-26 18:08:55.581515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.387 [2024-11-26 18:08:55.622837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.387 [2024-11-26 18:08:55.685278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.387 [2024-11-26 18:08:55.701478] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:12:18.387 INFO: Running with entropic power schedule (0xFF, 100). 00:12:18.387 INFO: Seed: 1220806048 00:12:18.387 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:18.387 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:18.387 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:12:18.387 INFO: A corpus is not provided, starting from an empty corpus 00:12:18.387 #2 INITED exec/s: 0 rss: 65Mb 00:12:18.387 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:18.387 This may also happen if the target rejected all inputs we tried so far 00:12:18.387 [2024-11-26 18:08:55.749893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.387 [2024-11-26 18:08:55.749922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.646 NEW_FUNC[1/715]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:12:18.646 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:18.646 #6 NEW cov: 12304 ft: 12288 corp: 2/11b lim: 50 exec/s: 0 rss: 73Mb L: 10/10 MS: 4 ChangeBinInt-ShuffleBytes-CMP-InsertByte- DE: ">\000\000\000\000\000\000\000"- 00:12:18.646 [2024-11-26 18:08:55.950351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.647 [2024-11-26 18:08:55.950385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.647 NEW_FUNC[1/3]: 0x17c7dc8 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3959 00:12:18.647 NEW_FUNC[2/3]: 0x19a40f8 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1615 00:12:18.647 #7 NEW cov: 12446 ft: 12826 corp: 3/22b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 InsertByte- 00:12:18.647 [2024-11-26 18:08:56.010456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.647 [2024-11-26 18:08:56.010481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.647 #8 NEW cov: 12452 ft: 13114 corp: 4/33b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ChangeByte- 00:12:18.647 [2024-11-26 18:08:56.070659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.647 [2024-11-26 18:08:56.070683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.647 #9 NEW cov: 12537 ft: 13484 corp: 5/44b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ShuffleBytes- 00:12:18.906 [2024-11-26 18:08:56.110730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.906 [2024-11-26 18:08:56.110754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.906 #10 NEW cov: 12537 ft: 13562 corp: 6/55b lim: 50 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:12:18.906 [2024-11-26 18:08:56.170918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.906 [2024-11-26 18:08:56.170942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.906 #11 NEW cov: 12537 ft: 13595 corp: 7/65b lim: 50 exec/s: 0 rss: 74Mb L: 10/11 MS: 1 EraseBytes- 00:12:18.906 [2024-11-26 18:08:56.231076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.906 [2024-11-26 18:08:56.231098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.906 #12 NEW cov: 12537 ft: 13649 corp: 8/76b lim: 50 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 ChangeBinInt- 00:12:18.906 [2024-11-26 18:08:56.271197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.906 [2024-11-26 18:08:56.271220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.906 #13 NEW cov: 12537 ft: 13791 corp: 9/88b lim: 50 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 InsertByte- 00:12:18.906 [2024-11-26 18:08:56.311324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:18.906 [2024-11-26 18:08:56.311347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:18.906 #14 NEW cov: 12537 ft: 13864 corp: 10/99b lim: 50 exec/s: 0 rss: 74Mb L: 11/12 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:12:19.165 [2024-11-26 18:08:56.371831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.165 [2024-11-26 18:08:56.371854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.165 [2024-11-26 18:08:56.371896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:19.165 [2024-11-26 18:08:56.371909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:19.165 [2024-11-26 18:08:56.371937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:19.165 [2024-11-26 18:08:56.371949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:19.165 #16 NEW cov: 12537 ft: 
14687 corp: 11/137b lim: 50 exec/s: 0 rss: 74Mb L: 38/38 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:12:19.165 [2024-11-26 18:08:56.411600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.165 [2024-11-26 18:08:56.411623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.165 #17 NEW cov: 12537 ft: 14699 corp: 12/149b lim: 50 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 InsertByte- 00:12:19.165 [2024-11-26 18:08:56.451735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.165 [2024-11-26 18:08:56.451759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.165 #18 NEW cov: 12537 ft: 14707 corp: 13/161b lim: 50 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 CMP- DE: "\001\000"- 00:12:19.165 [2024-11-26 18:08:56.511921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.165 [2024-11-26 18:08:56.511944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.165 #19 NEW cov: 12537 ft: 14709 corp: 14/174b lim: 50 exec/s: 0 rss: 74Mb L: 13/38 MS: 1 PersAutoDict- DE: "\001\000"- 00:12:19.165 [2024-11-26 18:08:56.552028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.166 [2024-11-26 18:08:56.552051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.166 #20 NEW cov: 12537 ft: 14742 corp: 15/186b lim: 50 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 ChangeBit- 00:12:19.425 [2024-11-26 18:08:56.612704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.425 [2024-11-26 18:08:56.612728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.425 [2024-11-26 18:08:56.612772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:19.425 [2024-11-26 18:08:56.612785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:19.425 [2024-11-26 18:08:56.612810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:19.425 [2024-11-26 18:08:56.612823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:19.425 [2024-11-26 18:08:56.612873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:19.425 [2024-11-26 18:08:56.612884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:19.425 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:19.425 #21 NEW cov: 12560 ft: 15103 corp: 16/227b lim: 50 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:12:19.425 [2024-11-26 18:08:56.652298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.425 [2024-11-26 
18:08:56.652321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.425 #22 NEW cov: 12560 ft: 15173 corp: 17/239b lim: 50 exec/s: 0 rss: 74Mb L: 12/41 MS: 1 ChangeBit- 00:12:19.425 [2024-11-26 18:08:56.692424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.425 [2024-11-26 18:08:56.692446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.425 #23 NEW cov: 12560 ft: 15251 corp: 18/252b lim: 50 exec/s: 23 rss: 74Mb L: 13/41 MS: 1 CrossOver- 00:12:19.425 [2024-11-26 18:08:56.752594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.425 [2024-11-26 18:08:56.752617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.425 #24 NEW cov: 12560 ft: 15260 corp: 19/265b lim: 50 exec/s: 24 rss: 74Mb L: 13/41 MS: 1 ShuffleBytes- 00:12:19.425 [2024-11-26 18:08:56.793036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.425 [2024-11-26 18:08:56.793058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.425 [2024-11-26 18:08:56.793120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:19.425 [2024-11-26 18:08:56.793132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:19.425 [2024-11-26 18:08:56.793180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:19.425 [2024-11-26 18:08:56.793193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:19.425 #25 NEW cov: 12560 ft: 15302 corp: 20/302b lim: 50 exec/s: 25 rss: 74Mb L: 37/41 MS: 1 InsertRepeatedBytes- 00:12:19.425 [2024-11-26 18:08:56.852915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.425 [2024-11-26 18:08:56.852937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.685 #26 NEW cov: 12560 ft: 15341 corp: 21/317b lim: 50 exec/s: 26 rss: 74Mb L: 15/41 MS: 1 CopyPart- 00:12:19.685 [2024-11-26 18:08:56.892952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.685 [2024-11-26 18:08:56.892975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.685 #27 NEW cov: 12560 ft: 15346 corp: 22/328b lim: 50 exec/s: 27 rss: 74Mb L: 11/41 MS: 1 ShuffleBytes- 00:12:19.685 [2024-11-26 18:08:56.933107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.685 [2024-11-26 18:08:56.933132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.685 #33 NEW cov: 12560 ft: 15354 corp: 23/340b lim: 50 exec/s: 33 rss: 74Mb L: 12/41 MS: 1 ChangeBinInt- 00:12:19.685 [2024-11-26 
18:08:56.993283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.685 [2024-11-26 18:08:56.993308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.685 #34 NEW cov: 12560 ft: 15366 corp: 24/351b lim: 50 exec/s: 34 rss: 74Mb L: 11/41 MS: 1 CrossOver- 00:12:19.685 [2024-11-26 18:08:57.033395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.685 [2024-11-26 18:08:57.033418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.685 #35 NEW cov: 12560 ft: 15380 corp: 25/363b lim: 50 exec/s: 35 rss: 74Mb L: 12/41 MS: 1 InsertByte- 00:12:19.685 [2024-11-26 18:08:57.073500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.685 [2024-11-26 18:08:57.073523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.685 #36 NEW cov: 12560 ft: 15415 corp: 26/376b lim: 50 exec/s: 36 rss: 74Mb L: 13/41 MS: 1 PersAutoDict- DE: "\001\000"- 00:12:19.685 [2024-11-26 18:08:57.113597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.685 [2024-11-26 18:08:57.113619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.944 #37 NEW cov: 12560 ft: 15427 corp: 27/387b lim: 50 exec/s: 37 rss: 75Mb L: 11/41 MS: 1 ChangeBit- 00:12:19.944 [2024-11-26 18:08:57.173775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.945 [2024-11-26 18:08:57.173799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.945 #38 NEW cov: 12560 ft: 15439 corp: 28/398b lim: 50 exec/s: 38 rss: 75Mb L: 11/41 MS: 1 ShuffleBytes- 00:12:19.945 [2024-11-26 18:08:57.213898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.945 [2024-11-26 18:08:57.213921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.945 #39 NEW cov: 12560 ft: 15455 corp: 29/411b lim: 50 exec/s: 39 rss: 75Mb L: 13/41 MS: 1 ShuffleBytes- 00:12:19.945 [2024-11-26 18:08:57.254006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.945 [2024-11-26 18:08:57.254029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.945 #40 NEW cov: 12560 ft: 15477 corp: 30/425b lim: 50 exec/s: 40 rss: 75Mb L: 14/41 MS: 1 InsertByte- 00:12:19.945 [2024-11-26 18:08:57.314162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.945 [2024-11-26 18:08:57.314184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:19.945 #41 NEW cov: 12560 ft: 15482 corp: 31/436b lim: 50 exec/s: 41 rss: 75Mb L: 11/41 MS: 1 ChangeByte- 00:12:19.945 [2024-11-26 18:08:57.354261] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:19.945 [2024-11-26 18:08:57.354290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.203 #42 NEW cov: 12560 ft: 15491 corp: 32/450b lim: 50 exec/s: 42 rss: 75Mb L: 14/41 MS: 1 ChangeByte- 00:12:20.204 [2024-11-26 18:08:57.414467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.204 [2024-11-26 18:08:57.414490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.204 #43 NEW cov: 12560 ft: 15498 corp: 33/462b lim: 50 exec/s: 43 rss: 75Mb L: 12/41 MS: 1 PersAutoDict- DE: "\001\000"- 00:12:20.204 [2024-11-26 18:08:57.474686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.204 [2024-11-26 18:08:57.474709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.204 #44 NEW cov: 12560 ft: 15524 corp: 34/475b lim: 50 exec/s: 44 rss: 75Mb L: 13/41 MS: 1 InsertByte- 00:12:20.204 [2024-11-26 18:08:57.534937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.204 [2024-11-26 18:08:57.534960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.204 [2024-11-26 18:08:57.535026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:20.204 [2024-11-26 18:08:57.535036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:20.204 #45 NEW cov: 12560 ft: 15810 corp: 35/498b lim: 50 exec/s: 45 rss: 75Mb L: 23/41 MS: 1 CopyPart- 00:12:20.204 [2024-11-26 18:08:57.574926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.204 [2024-11-26 18:08:57.574950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.204 #46 NEW cov: 12560 ft: 15830 corp: 36/510b lim: 50 exec/s: 46 rss: 75Mb L: 12/41 MS: 1 ChangeBinInt- 00:12:20.204 [2024-11-26 18:08:57.635087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.204 [2024-11-26 18:08:57.635110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.463 #47 NEW cov: 12560 ft: 15838 corp: 37/524b lim: 50 exec/s: 47 rss: 75Mb L: 14/41 MS: 1 CopyPart- 00:12:20.463 [2024-11-26 18:08:57.675216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.463 [2024-11-26 18:08:57.675239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.463 #48 NEW cov: 12560 ft: 15844 corp: 38/538b lim: 50 exec/s: 48 rss: 75Mb L: 14/41 MS: 1 CMP- DE: "\377\377\377\017"- 00:12:20.463 [2024-11-26 18:08:57.735378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:20.463 [2024-11-26 18:08:57.735400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.463 #50 NEW cov: 12560 ft: 15854 corp: 39/550b lim: 50 exec/s: 25 rss: 75Mb L: 12/41 MS: 2 EraseBytes-CrossOver- 00:12:20.463 #50 DONE cov: 12560 ft: 15854 corp: 39/550b lim: 50 exec/s: 25 rss: 75Mb 00:12:20.463 ###### Recommended dictionary. ###### 00:12:20.463 ">\000\000\000\000\000\000\000" # Uses: 2 00:12:20.463 "\001\000" # Uses: 4 00:12:20.463 "\377\377\377\017" # Uses: 0 00:12:20.463 ###### End of recommended dictionary. ###### 00:12:20.463 Done 50 runs in 2 second(s) 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:12:20.463 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:20.722 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:20.722 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:20.722 18:08:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:12:20.722 [2024-11-26 18:08:57.934481] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
00:12:20.722 [2024-11-26 18:08:57.934538] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295860 ] 00:12:20.722 [2024-11-26 18:08:58.148808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.982 [2024-11-26 18:08:58.189483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.982 [2024-11-26 18:08:58.251842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.982 [2024-11-26 18:08:58.268034] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:12:20.982 INFO: Running with entropic power schedule (0xFF, 100). 00:12:20.982 INFO: Seed: 3789795230 00:12:20.982 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:20.982 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:20.982 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:12:20.982 INFO: A corpus is not provided, starting from an empty corpus 00:12:20.982 #2 INITED exec/s: 0 rss: 66Mb 00:12:20.982 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:20.982 This may also happen if the target rejected all inputs we tried so far 00:12:20.982 [2024-11-26 18:08:58.313982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:20.982 [2024-11-26 18:08:58.314009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:20.982 [2024-11-26 18:08:58.314061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:20.982 [2024-11-26 18:08:58.314073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:20.982 [2024-11-26 18:08:58.314124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:20.982 [2024-11-26 18:08:58.314140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:20.982 [2024-11-26 18:08:58.314194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:20.982 [2024-11-26 18:08:58.314206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:21.241 NEW_FUNC[1/718]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:12:21.241 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:21.241 #10 NEW cov: 12359 ft: 12358 corp: 2/72b lim: 85 exec/s: 0 rss: 73Mb L: 71/71 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:12:21.241 [2024-11-26 18:08:58.463796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.241 [2024-11-26 18:08:58.463828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:12:21.241 #13 NEW cov: 12472 ft: 13861 corp: 3/96b lim: 85 exec/s: 0 rss: 73Mb L: 24/71 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:12:21.241 [2024-11-26 18:08:58.503891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.241 [2024-11-26 18:08:58.503916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.241 #14 NEW cov: 12478 ft: 14031 corp: 4/120b lim: 85 exec/s: 0 rss: 73Mb L: 24/71 MS: 1 ChangeBit- 00:12:21.241 [2024-11-26 18:08:58.564222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.241 [2024-11-26 18:08:58.564246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.241 [2024-11-26 18:08:58.564303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:21.241 [2024-11-26 18:08:58.564314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:21.241 #17 NEW cov: 12563 ft: 14611 corp: 5/158b lim: 85 exec/s: 0 rss: 73Mb L: 38/71 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:12:21.241 [2024-11-26 18:08:58.604123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.242 [2024-11-26 18:08:58.604146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.242 #18 NEW cov: 12563 ft: 14687 corp: 6/182b lim: 85 exec/s: 0 rss: 73Mb L: 24/71 MS: 1 ShuffleBytes- 00:12:21.242 [2024-11-26 18:08:58.664302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.242 [2024-11-26 18:08:58.664326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.242 #19 NEW cov: 12563 ft: 14803 corp: 7/206b lim: 85 exec/s: 0 rss: 73Mb L: 24/71 MS: 1 ChangeBinInt- 00:12:21.501 [2024-11-26 18:08:58.704415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.501 [2024-11-26 18:08:58.704439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.501 #20 NEW cov: 12563 ft: 14949 corp: 8/229b lim: 85 exec/s: 0 rss: 73Mb L: 23/71 MS: 1 EraseBytes- 00:12:21.501 [2024-11-26 18:08:58.744713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.501 [2024-11-26 18:08:58.744736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.501 [2024-11-26 18:08:58.744789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:21.501 [2024-11-26 18:08:58.744799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:21.501 #24 NEW cov: 12563 ft: 14977 corp: 9/268b lim: 85 exec/s: 0 rss: 73Mb L: 39/71 MS: 4 ChangeByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:12:21.501 [2024-11-26 18:08:58.784667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.501 [2024-11-26 18:08:58.784691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.501 #25 NEW cov: 12563 ft: 15058 corp: 10/292b lim: 85 exec/s: 0 rss: 73Mb L: 24/71 MS: 1 ChangeByte- 00:12:21.501 [2024-11-26 18:08:58.824824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.501 [2024-11-26 18:08:58.824849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.501 #26 NEW cov: 12563 ft: 15093 corp: 11/316b lim: 85 exec/s: 0 rss: 73Mb L: 24/71 MS: 1 ChangeBinInt- 00:12:21.501 [2024-11-26 18:08:58.884972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.501 [2024-11-26 18:08:58.884995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.501 #27 NEW cov: 12563 ft: 15099 corp: 12/340b lim: 85 exec/s: 0 rss: 74Mb L: 24/71 MS: 1 ChangeBinInt- 00:12:21.501 [2024-11-26 18:08:58.945131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.501 [2024-11-26 18:08:58.945156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.760 #28 NEW cov: 12563 ft: 15109 corp: 13/364b lim: 85 exec/s: 0 rss: 74Mb L: 24/71 MS: 1 ChangeByte- 00:12:21.760 [2024-11-26 18:08:58.985190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.760 [2024-11-26 18:08:58.985213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.761 #29 NEW cov: 12563 ft: 15154 corp: 14/388b lim: 85 exec/s: 0 rss: 74Mb L: 24/71 MS: 1 ChangeBinInt- 00:12:21.761 [2024-11-26 18:08:59.045452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.761 [2024-11-26 18:08:59.045474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.761 #33 NEW cov: 12563 ft: 15170 corp: 15/406b lim: 85 exec/s: 0 rss: 74Mb L: 18/71 MS: 4 EraseBytes-ChangeBit-EraseBytes-CopyPart- 00:12:21.761 [2024-11-26 18:08:59.085516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.761 [2024-11-26 18:08:59.085540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.761 #34 NEW cov: 12563 ft: 15189 corp: 16/430b lim: 85 exec/s: 0 rss: 74Mb L: 24/71 MS: 1 InsertByte- 00:12:21.761 [2024-11-26 18:08:59.145699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:21.761 [2024-11-26 18:08:59.145723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.761 #35 NEW cov: 12563 ft: 15206 corp: 17/448b lim: 85 exec/s: 0 rss: 74Mb L: 18/71 MS: 1 CopyPart- 00:12:21.761 [2024-11-26 18:08:59.206065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:12:21.761 [2024-11-26 18:08:59.206088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:21.761 [2024-11-26 18:08:59.206148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:21.761 [2024-11-26 18:08:59.206159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:22.019 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:22.019 #36 NEW cov: 12586 ft: 15293 corp: 18/486b lim: 85 exec/s: 0 rss: 74Mb L: 38/71 MS: 1 ChangeBit- 00:12:22.019 [2024-11-26 18:08:59.266046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.019 [2024-11-26 18:08:59.266070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.019 #37 NEW cov: 12586 ft: 15302 corp: 19/510b lim: 85 exec/s: 0 rss: 74Mb L: 24/71 MS: 1 CrossOver- 00:12:22.019 [2024-11-26 18:08:59.306139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.019 [2024-11-26 18:08:59.306162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.019 #38 NEW cov: 12586 ft: 15305 corp: 20/534b lim: 85 exec/s: 38 rss: 74Mb L: 24/71 MS: 1 ShuffleBytes- 00:12:22.019 [2024-11-26 18:08:59.346494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.019 [2024-11-26 18:08:59.346517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.019 [2024-11-26 18:08:59.346586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:22.019 [2024-11-26 18:08:59.346597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:22.019 #39 NEW cov: 12586 ft: 15337 corp: 21/572b lim: 85 exec/s: 39 rss: 74Mb L: 38/71 MS: 1 ChangeBit- 00:12:22.019 [2024-11-26 18:08:59.386402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.019 [2024-11-26 18:08:59.386425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.019 #40 NEW cov: 12586 ft: 15399 corp: 22/596b lim: 85 exec/s: 40 rss: 74Mb L: 24/71 MS: 1 ShuffleBytes- 00:12:22.019 [2024-11-26 18:08:59.446575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.019 [2024-11-26 18:08:59.446597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.278 #41 NEW cov: 12586 ft: 15417 corp: 23/620b lim: 85 exec/s: 41 rss: 74Mb L: 24/71 MS: 1 ChangeBinInt- 00:12:22.278 [2024-11-26 18:08:59.506736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.278 [2024-11-26 18:08:59.506759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:12:22.278 #42 NEW cov: 12586 ft: 15432 corp: 24/644b lim: 85 exec/s: 42 rss: 74Mb L: 24/71 MS: 1 ShuffleBytes- 00:12:22.278 [2024-11-26 18:08:59.546812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.278 [2024-11-26 18:08:59.546836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.278 #43 NEW cov: 12586 ft: 15465 corp: 25/668b lim: 85 exec/s: 43 rss: 74Mb L: 24/71 MS: 1 ChangeByte- 00:12:22.278 [2024-11-26 18:08:59.586961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.278 [2024-11-26 18:08:59.586984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.278 #44 NEW cov: 12586 ft: 15476 corp: 26/697b lim: 85 exec/s: 44 rss: 74Mb L: 29/71 MS: 1 CopyPart- 00:12:22.278 [2024-11-26 18:08:59.627090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.278 [2024-11-26 18:08:59.627113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.278 #45 NEW cov: 12586 ft: 15529 corp: 27/721b lim: 85 exec/s: 45 rss: 74Mb L: 24/71 MS: 1 ShuffleBytes- 00:12:22.278 [2024-11-26 18:08:59.687259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.278 [2024-11-26 18:08:59.687282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.537 #46 NEW cov: 12586 ft: 15541 corp: 28/745b lim: 85 exec/s: 46 rss: 75Mb L: 24/71 MS: 1 ShuffleBytes- 00:12:22.537 [2024-11-26 18:08:59.747441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.537 [2024-11-26 18:08:59.747465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.537 #47 NEW cov: 12586 ft: 15546 corp: 29/769b lim: 85 exec/s: 47 rss: 75Mb L: 24/71 MS: 1 ChangeByte- 00:12:22.537 [2024-11-26 18:08:59.807602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.537 [2024-11-26 18:08:59.807626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.537 #48 NEW cov: 12586 ft: 15579 corp: 30/802b lim: 85 exec/s: 48 rss: 75Mb L: 33/71 MS: 1 InsertRepeatedBytes- 00:12:22.537 [2024-11-26 18:08:59.848229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.537 [2024-11-26 18:08:59.848255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.537 [2024-11-26 18:08:59.848305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:22.537 [2024-11-26 18:08:59.848318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:22.537 [2024-11-26 18:08:59.848371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:22.537 [2024-11-26 
18:08:59.848389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:22.537 [2024-11-26 18:08:59.848442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:22.537 [2024-11-26 18:08:59.848456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:22.537 #49 NEW cov: 12586 ft: 15596 corp: 31/882b lim: 85 exec/s: 49 rss: 75Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:12:22.537 [2024-11-26 18:08:59.907880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.537 [2024-11-26 18:08:59.907904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.537 #50 NEW cov: 12586 ft: 15643 corp: 32/912b lim: 85 exec/s: 50 rss: 75Mb L: 30/80 MS: 1 InsertRepeatedBytes- 00:12:22.537 [2024-11-26 18:08:59.968132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.537 [2024-11-26 18:08:59.968155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.796 #51 NEW cov: 12586 ft: 15709 corp: 33/936b lim: 85 exec/s: 51 rss: 75Mb L: 24/80 MS: 1 ChangeBit- 00:12:22.796 [2024-11-26 18:09:00.028283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.796 [2024-11-26 18:09:00.028310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.796 #52 NEW cov: 12586 ft: 15711 corp: 34/960b lim: 85 exec/s: 52 rss: 75Mb L: 24/80 MS: 1 ChangeByte- 00:12:22.796 [2024-11-26 18:09:00.068364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.796 [2024-11-26 18:09:00.068396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.796 #53 NEW cov: 12586 ft: 15720 corp: 35/984b lim: 85 exec/s: 53 rss: 75Mb L: 24/80 MS: 1 ChangeBit- 00:12:22.796 [2024-11-26 18:09:00.108976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.796 [2024-11-26 18:09:00.109004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.796 [2024-11-26 18:09:00.109053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:22.796 [2024-11-26 18:09:00.109065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:22.796 [2024-11-26 18:09:00.109114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:22.796 [2024-11-26 18:09:00.109125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:22.796 [2024-11-26 18:09:00.109180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:22.796 [2024-11-26 18:09:00.109192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:22.796 #54 NEW cov: 12586 ft: 15757 corp: 36/1055b lim: 85 exec/s: 54 rss: 75Mb L: 71/80 MS: 1 ShuffleBytes- 00:12:22.796 [2024-11-26 18:09:00.169138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.796 [2024-11-26 18:09:00.169164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:22.796 [2024-11-26 18:09:00.169213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:22.796 [2024-11-26 18:09:00.169226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:22.796 [2024-11-26 18:09:00.169270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:22.796 [2024-11-26 18:09:00.169282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:22.796 [2024-11-26 18:09:00.169334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:22.796 [2024-11-26 18:09:00.169347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:22.796 #55 NEW cov: 12586 ft: 15769 corp: 37/1126b lim: 85 exec/s: 55 rss: 75Mb L: 71/80 MS: 1 ChangeByte- 00:12:22.796 [2024-11-26 18:09:00.228825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:22.796 [2024-11-26 18:09:00.228851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.057 #56 NEW cov: 12586 ft: 15781 corp: 38/1144b lim: 85 exec/s: 56 rss: 75Mb L: 18/80 MS: 1 ChangeByte- 00:12:23.057 [2024-11-26 18:09:00.268933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:23.057 [2024-11-26 18:09:00.268957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.057 #57 NEW cov: 12586 ft: 15813 corp: 39/1168b lim: 85 exec/s: 57 rss: 75Mb L: 24/80 MS: 1 ChangeByte- 00:12:23.057 [2024-11-26 18:09:00.309044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:23.057 [2024-11-26 18:09:00.309068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.057 #58 NEW cov: 12586 ft: 15829 corp: 40/1189b lim: 85 exec/s: 29 rss: 75Mb L: 21/80 MS: 1 EraseBytes- 00:12:23.057 #58 DONE cov: 12586 ft: 15829 corp: 40/1189b lim: 85 exec/s: 29 rss: 75Mb 00:12:23.057 Done 58 runs in 2 second(s) 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz 
-- nvmf/run.sh@24 -- # local timen=1 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:23.057 18:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:12:23.057 [2024-11-26 18:09:00.487782] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:23.057 [2024-11-26 18:09:00.487830] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296417 ] 00:12:23.317 [2024-11-26 18:09:00.694027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.317 [2024-11-26 18:09:00.733759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.577 [2024-11-26 18:09:00.796134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.577 [2024-11-26 18:09:00.812333] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:12:23.577 INFO: Running with entropic power schedule (0xFF, 100). 00:12:23.577 INFO: Seed: 2036829836 00:12:23.577 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:23.577 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:23.577 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:12:23.577 INFO: A corpus is not provided, starting from an empty corpus 00:12:23.577 #2 INITED exec/s: 0 rss: 65Mb 00:12:23.577 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:23.577 This may also happen if the target rejected all inputs we tried so far 00:12:23.577 [2024-11-26 18:09:00.883961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:23.577 [2024-11-26 18:09:00.884011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.577 [2024-11-26 18:09:00.884124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:23.577 [2024-11-26 18:09:00.884140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:23.577 [2024-11-26 18:09:00.884244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:23.577 [2024-11-26 18:09:00.884264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:23.837 NEW_FUNC[1/716]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:12:23.837 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:23.837 #18 NEW cov: 12287 ft: 12262 corp: 2/16b lim: 25 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:12:23.837 [2024-11-26 18:09:01.114437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:23.837 [2024-11-26 18:09:01.114487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.837 [2024-11-26 18:09:01.114600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:23.837 [2024-11-26 18:09:01.114619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:23.837 [2024-11-26 18:09:01.114726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:23.837 [2024-11-26 18:09:01.114748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:23.837 NEW_FUNC[1/1]: 0x17ea1d8 in nvme_ctrlr_get_ready_timeout /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1292 00:12:23.837 #19 NEW cov: 12405 ft: 12949 corp: 3/32b lim: 25 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 CrossOver- 00:12:23.837 [2024-11-26 18:09:01.204743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:23.837 [2024-11-26 18:09:01.204782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.837 [2024-11-26 18:09:01.204850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:23.837 [2024-11-26 18:09:01.204874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:23.837 [2024-11-26 18:09:01.204948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:23.837 [2024-11-26 18:09:01.204966] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:23.837 #25 NEW cov: 12411 ft: 13167 corp: 4/47b lim: 25 exec/s: 0 rss: 73Mb L: 15/16 MS: 1 ChangeByte- 00:12:23.837 [2024-11-26 18:09:01.264549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:23.837 [2024-11-26 18:09:01.264585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:23.837 [2024-11-26 18:09:01.264655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:23.837 [2024-11-26 18:09:01.264675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.095 #26 NEW cov: 12496 ft: 13710 corp: 5/59b lim: 25 exec/s: 0 rss: 73Mb L: 12/16 MS: 1 EraseBytes- 00:12:24.095 [2024-11-26 18:09:01.324705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.095 [2024-11-26 18:09:01.324742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.095 [2024-11-26 18:09:01.324813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.095 [2024-11-26 18:09:01.324833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.095 #30 NEW cov: 12496 ft: 13812 corp: 6/71b lim: 25 exec/s: 0 rss: 73Mb L: 12/16 MS: 4 CrossOver-InsertByte-ChangeBit-InsertRepeatedBytes- 00:12:24.095 [2024-11-26 18:09:01.385195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.095 [2024-11-26 18:09:01.385232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.095 [2024-11-26 18:09:01.385307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.095 [2024-11-26 18:09:01.385323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.095 [2024-11-26 18:09:01.385428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.095 [2024-11-26 18:09:01.385449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.095 #31 NEW cov: 12496 ft: 13878 corp: 7/86b lim: 25 exec/s: 0 rss: 73Mb L: 15/16 MS: 1 ChangeBit- 00:12:24.095 [2024-11-26 18:09:01.444923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.095 [2024-11-26 18:09:01.444958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.095 #33 NEW cov: 12496 ft: 14283 corp: 8/95b lim: 25 exec/s: 0 rss: 73Mb L: 9/16 MS: 2 ChangeBit-InsertRepeatedBytes- 00:12:24.095 [2024-11-26 18:09:01.506478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.095 [2024-11-26 18:09:01.506514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.095 [2024-11-26 18:09:01.506618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.096 [2024-11-26 18:09:01.506639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.096 [2024-11-26 18:09:01.506741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.096 [2024-11-26 18:09:01.506758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.096 [2024-11-26 18:09:01.506847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:24.096 [2024-11-26 18:09:01.506868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.096 #34 NEW cov: 12496 ft: 14745 corp: 9/117b lim: 25 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:12:24.353 [2024-11-26 18:09:01.566498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.353 [2024-11-26 18:09:01.566534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.353 [2024-11-26 18:09:01.566629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.353 [2024-11-26 18:09:01.566650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.353 [2024-11-26 18:09:01.566738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.353 [2024-11-26 18:09:01.566758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.353 [2024-11-26 18:09:01.566869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:24.353 [2024-11-26 18:09:01.566889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.353 #35 NEW cov: 12496 ft: 14778 corp: 10/141b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 CopyPart- 00:12:24.353 [2024-11-26 18:09:01.656888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.353 [2024-11-26 18:09:01.656924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.353 [2024-11-26 18:09:01.657009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.353 [2024-11-26 18:09:01.657028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.353 [2024-11-26 18:09:01.657126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.353 [2024-11-26 18:09:01.657146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.353 [2024-11-26 18:09:01.657250] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:24.353 [2024-11-26 18:09:01.657267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.354 #36 NEW cov: 12496 ft: 14810 corp: 11/165b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBinInt- 00:12:24.354 [2024-11-26 18:09:01.746895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.354 [2024-11-26 18:09:01.746928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.354 [2024-11-26 18:09:01.747032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.354 [2024-11-26 18:09:01.747052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.354 [2024-11-26 18:09:01.747156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.354 [2024-11-26 18:09:01.747174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.354 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:24.354 #37 NEW cov: 12519 ft: 14845 corp: 12/180b lim: 25 exec/s: 0 rss: 73Mb L: 15/24 MS: 1 ChangeBinInt- 00:12:24.612 [2024-11-26 18:09:01.807409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.612 [2024-11-26 18:09:01.807443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.612 [2024-11-26 18:09:01.807535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.612 [2024-11-26 18:09:01.807554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.612 [2024-11-26 18:09:01.807658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.612 [2024-11-26 18:09:01.807676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.612 #38 NEW cov: 12519 ft: 14881 corp: 13/198b lim: 25 exec/s: 38 rss: 74Mb L: 18/24 MS: 1 InsertRepeatedBytes- 00:12:24.612 [2024-11-26 18:09:01.886957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.612 [2024-11-26 18:09:01.886990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.612 #39 NEW cov: 12519 ft: 14915 corp: 14/207b lim: 25 exec/s: 39 rss: 74Mb L: 9/24 MS: 1 CopyPart- 00:12:24.612 [2024-11-26 18:09:01.978031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.612 [2024-11-26 18:09:01.978061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.612 [2024-11-26 18:09:01.978162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 
00:12:24.612 [2024-11-26 18:09:01.978186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.612 [2024-11-26 18:09:01.978270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.612 [2024-11-26 18:09:01.978287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.612 #40 NEW cov: 12519 ft: 15004 corp: 15/222b lim: 25 exec/s: 40 rss: 74Mb L: 15/24 MS: 1 ChangeBit- 00:12:24.871 [2024-11-26 18:09:02.068049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.871 [2024-11-26 18:09:02.068080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.068170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.871 [2024-11-26 18:09:02.068188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.871 #41 NEW cov: 12519 ft: 15076 corp: 16/235b lim: 25 exec/s: 41 rss: 74Mb L: 13/24 MS: 1 CrossOver- 00:12:24.871 [2024-11-26 18:09:02.128451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.871 [2024-11-26 18:09:02.128482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.128565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.871 [2024-11-26 18:09:02.128584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.128683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.871 [2024-11-26 18:09:02.128698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.871 #42 NEW cov: 12519 ft: 15098 corp: 17/250b lim: 25 exec/s: 42 rss: 74Mb L: 15/24 MS: 1 ChangeByte- 00:12:24.871 [2024-11-26 18:09:02.188950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.871 [2024-11-26 18:09:02.188981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.189094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.871 [2024-11-26 18:09:02.189112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.189205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.871 [2024-11-26 18:09:02.189219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.189321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:24.871 
[2024-11-26 18:09:02.189339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.871 #43 NEW cov: 12519 ft: 15106 corp: 18/274b lim: 25 exec/s: 43 rss: 74Mb L: 24/24 MS: 1 ChangeByte- 00:12:24.871 [2024-11-26 18:09:02.248952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.871 [2024-11-26 18:09:02.248983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.249071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.871 [2024-11-26 18:09:02.249090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.249167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.871 [2024-11-26 18:09:02.249186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.871 #44 NEW cov: 12519 ft: 15134 corp: 19/289b lim: 25 exec/s: 44 rss: 74Mb L: 15/24 MS: 1 CrossOver- 00:12:24.871 [2024-11-26 18:09:02.309040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:24.871 [2024-11-26 18:09:02.309070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.309154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:24.871 [2024-11-26 18:09:02.309174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.871 [2024-11-26 18:09:02.309270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:24.871 [2024-11-26 18:09:02.309289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.130 #45 NEW cov: 12519 ft: 15146 corp: 20/304b lim: 25 exec/s: 45 rss: 74Mb L: 15/24 MS: 1 ChangeByte- 00:12:25.130 [2024-11-26 18:09:02.368930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.130 [2024-11-26 18:09:02.368961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.130 #46 NEW cov: 12519 ft: 15162 corp: 21/313b lim: 25 exec/s: 46 rss: 74Mb L: 9/24 MS: 1 ChangeBinInt- 00:12:25.130 [2024-11-26 18:09:02.429866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.130 [2024-11-26 18:09:02.429896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.130 [2024-11-26 18:09:02.429991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:25.130 [2024-11-26 18:09:02.430009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.130 [2024-11-26 18:09:02.430111] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:25.130 [2024-11-26 18:09:02.430129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.130 #47 NEW cov: 12519 ft: 15228 corp: 22/328b lim: 25 exec/s: 47 rss: 74Mb L: 15/24 MS: 1 ShuffleBytes- 00:12:25.130 [2024-11-26 18:09:02.509469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.130 [2024-11-26 18:09:02.509500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.130 #48 NEW cov: 12519 ft: 15249 corp: 23/336b lim: 25 exec/s: 48 rss: 74Mb L: 8/24 MS: 1 CrossOver- 00:12:25.389 [2024-11-26 18:09:02.590739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.389 [2024-11-26 18:09:02.590771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.389 [2024-11-26 18:09:02.590879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:25.389 [2024-11-26 18:09:02.590899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.389 [2024-11-26 18:09:02.591000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:25.389 [2024-11-26 18:09:02.591018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.389 [2024-11-26 18:09:02.591118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:25.389 [2024-11-26 18:09:02.591139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.389 #49 NEW cov: 12519 ft: 15271 corp: 24/358b lim: 25 exec/s: 49 rss: 74Mb L: 22/24 MS: 1 ChangeByte- 00:12:25.389 [2024-11-26 18:09:02.680325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.389 [2024-11-26 18:09:02.680360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.389 [2024-11-26 18:09:02.680445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:25.389 [2024-11-26 18:09:02.680465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.389 #50 NEW cov: 12519 ft: 15310 corp: 25/369b lim: 25 exec/s: 50 rss: 74Mb L: 11/24 MS: 1 EraseBytes- 00:12:25.389 [2024-11-26 18:09:02.740642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.389 [2024-11-26 18:09:02.740673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.389 #51 NEW cov: 12519 ft: 15319 corp: 26/378b lim: 25 exec/s: 51 rss: 74Mb L: 9/24 MS: 1 CopyPart- 00:12:25.389 [2024-11-26 18:09:02.831589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:25.389 [2024-11-26 
18:09:02.831623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.389 [2024-11-26 18:09:02.831713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:25.389 [2024-11-26 18:09:02.831731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.389 [2024-11-26 18:09:02.831836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:25.389 [2024-11-26 18:09:02.831855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.649 #52 NEW cov: 12519 ft: 15329 corp: 27/393b lim: 25 exec/s: 26 rss: 74Mb L: 15/24 MS: 1 ChangeByte- 00:12:25.649 #52 DONE cov: 12519 ft: 15329 corp: 27/393b lim: 25 exec/s: 26 rss: 74Mb 00:12:25.649 Done 52 runs in 2 second(s) 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:25.649 18:09:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:12:25.649 [2024-11-26 18:09:02.995962] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:25.649 [2024-11-26 18:09:02.996009] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297011 ] 00:12:25.909 [2024-11-26 18:09:03.194778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.909 [2024-11-26 18:09:03.234535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.909 [2024-11-26 18:09:03.296907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.909 [2024-11-26 18:09:03.313076] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:12:25.909 INFO: Running with entropic power schedule (0xFF, 100). 00:12:25.909 INFO: Seed: 244864121 00:12:25.909 INFO: Loaded 1 modules (389659 inline 8-bit counters): 389659 [0x2c6d80c, 0x2ccca27), 00:12:25.909 INFO: Loaded 1 PC tables (389659 PCs): 389659 [0x2ccca28,0x32bebd8), 00:12:25.909 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:25.909 INFO: A corpus is not provided, starting from an empty corpus 00:12:25.909 #2 INITED exec/s: 0 rss: 66Mb 00:12:25.909 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:25.909 This may also happen if the target rejected all inputs we tried so far 00:12:26.168 [2024-11-26 18:09:03.382460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.382507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.168 [2024-11-26 18:09:03.382577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.382599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.168 [2024-11-26 18:09:03.382663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.382684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.168 [2024-11-26 18:09:03.382792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.382811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.168 NEW_FUNC[1/718]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:12:26.168 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:26.168 #13 NEW cov: 12362 ft: 12364 corp: 2/92b lim: 100 
exec/s: 0 rss: 74Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:12:26.168 [2024-11-26 18:09:03.612989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.613035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.168 [2024-11-26 18:09:03.613098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.613118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.168 [2024-11-26 18:09:03.613188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.613207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.168 [2024-11-26 18:09:03.613296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.168 [2024-11-26 18:09:03.613314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.427 #14 NEW cov: 12477 ft: 13040 corp: 3/183b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 ChangeByte- 00:12:26.427 [2024-11-26 18:09:03.703198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16565899577774499301 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.703230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.703303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.703321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.703413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.703429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.427 #21 NEW cov: 12483 ft: 13632 corp: 4/261b lim: 100 exec/s: 0 rss: 74Mb L: 78/91 MS: 2 InsertByte-InsertRepeatedBytes- 00:12:26.427 [2024-11-26 18:09:03.763886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.763921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.764003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.764021] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.764114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.764133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.764220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.764240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.427 #22 NEW cov: 12568 ft: 13852 corp: 5/352b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 CopyPart- 00:12:26.427 [2024-11-26 18:09:03.854307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.854336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.854425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.854444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.854495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.854514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.427 [2024-11-26 18:09:03.854579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.427 [2024-11-26 18:09:03.854601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.686 #23 NEW cov: 12568 ft: 13912 corp: 6/443b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 ShuffleBytes- 00:12:26.686 [2024-11-26 18:09:03.914642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:03.914672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.686 [2024-11-26 18:09:03.914781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:03.914800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.686 [2024-11-26 18:09:03.914891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:12:26.686 [2024-11-26 18:09:03.914905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.686 [2024-11-26 18:09:03.914994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:03.915011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.686 #24 NEW cov: 12568 ft: 13967 corp: 7/534b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 CopyPart- 00:12:26.686 [2024-11-26 18:09:03.974698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16565786328076838373 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:03.974728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.686 [2024-11-26 18:09:03.974818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:03.974837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.686 [2024-11-26 18:09:03.974925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:03.974939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.686 #25 NEW cov: 12568 ft: 14089 corp: 8/613b lim: 100 exec/s: 0 rss: 74Mb L: 79/91 MS: 1 InsertByte- 00:12:26.686 [2024-11-26 18:09:04.065316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069666308351 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.686 [2024-11-26 18:09:04.065348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.686 [2024-11-26 18:09:04.065438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.065455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.687 [2024-11-26 18:09:04.065545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.065559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.687 [2024-11-26 18:09:04.065645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.065666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.687 #30 NEW cov: 12568 ft: 14175 corp: 9/699b lim: 100 exec/s: 0 rss: 74Mb L: 86/91 MS: 5 
ChangeBit-ChangeBit-CMP-CopyPart-InsertRepeatedBytes- DE: "\001\000"- 00:12:26.687 [2024-11-26 18:09:04.125885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.125918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.687 [2024-11-26 18:09:04.126017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.126034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.687 [2024-11-26 18:09:04.126116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.126131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.687 [2024-11-26 18:09:04.126214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.687 [2024-11-26 18:09:04.126230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.946 #31 NEW cov: 12568 ft: 14224 corp: 10/791b lim: 100 exec/s: 0 rss: 74Mb L: 92/92 MS: 1 InsertByte- 00:12:26.946 [2024-11-26 18:09:04.216249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.216279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.216384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.216402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.216490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.216505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.216605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.216622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.946 NEW_FUNC[1/1]: 0x1c47238 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:26.946 #32 NEW cov: 12591 ft: 14259 corp: 11/890b lim: 100 exec/s: 0 rss: 74Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:12:26.946 [2024-11-26 18:09:04.276719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:0 nsid:0 lba:18446744069666308351 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.276751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.276850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.276870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.276957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.276972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.277062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.277081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.946 #33 NEW cov: 12591 ft: 14291 corp: 12/977b lim: 100 exec/s: 0 rss: 75Mb L: 87/99 MS: 1 InsertByte- 00:12:26.946 [2024-11-26 18:09:04.367314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.367346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.367454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.367473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.367557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.367571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.946 [2024-11-26 18:09:04.367662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.946 [2024-11-26 18:09:04.367681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.205 #34 NEW cov: 12591 ft: 14310 corp: 13/1069b lim: 100 exec/s: 34 rss: 75Mb L: 92/99 MS: 1 InsertByte- 00:12:27.205 [2024-11-26 18:09:04.457521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16565899577774499301 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.457553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.205 [2024-11-26 18:09:04.457633] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.457654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.205 [2024-11-26 18:09:04.457727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.457744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.205 #35 NEW cov: 12591 ft: 14325 corp: 14/1147b lim: 100 exec/s: 35 rss: 75Mb L: 78/99 MS: 1 PersAutoDict- DE: "\001\000"- 00:12:27.205 [2024-11-26 18:09:04.518031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069666308351 len:61184 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.518062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.205 [2024-11-26 18:09:04.518152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.518171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.205 [2024-11-26 18:09:04.518258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.518276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.205 #36 NEW cov: 12591 ft: 14342 corp: 15/1226b lim: 100 exec/s: 36 rss: 75Mb L: 79/99 MS: 1 EraseBytes- 00:12:27.205 [2024-11-26 18:09:04.608582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16565899577774499301 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.608614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.205 [2024-11-26 18:09:04.608705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16565899579919558117 len:58652 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.608722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.205 [2024-11-26 18:09:04.608813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16565899579919558117 len:58854 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.205 [2024-11-26 18:09:04.608827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.205 #37 NEW cov: 12591 ft: 14348 corp: 16/1304b lim: 100 exec/s: 37 rss: 75Mb L: 78/99 MS: 1 ChangeBinInt- 00:12:27.463 [2024-11-26 18:09:04.669504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.669534] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.463 [2024-11-26 18:09:04.669633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.669653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.463 [2024-11-26 18:09:04.669742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.669756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.463 [2024-11-26 18:09:04.669842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.669862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.463 #38 NEW cov: 12591 ft: 14409 corp: 17/1401b lim: 100 exec/s: 38 rss: 75Mb L: 97/99 MS: 1 CrossOver- 00:12:27.463 [2024-11-26 18:09:04.759185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16564204105074664830 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.759215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.463 [2024-11-26 18:09:04.759291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.759308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.463 #41 NEW cov: 12591 ft: 14779 corp: 18/1457b lim: 100 exec/s: 41 rss: 75Mb L: 56/99 MS: 3 CrossOver-InsertByte-CrossOver- 00:12:27.463 [2024-11-26 18:09:04.850606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.850639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.463 [2024-11-26 18:09:04.850739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15915685760777314271 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.850758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.463 [2024-11-26 18:09:04.850812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.463 [2024-11-26 18:09:04.850832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.464 [2024-11-26 18:09:04.850903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.464 [2024-11-26 18:09:04.850922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.464 #42 NEW cov: 12591 ft: 14791 corp: 19/1554b lim: 100 exec/s: 42 rss: 75Mb L: 97/99 MS: 1 ChangeBinInt- 00:12:27.722 [2024-11-26 18:09:04.941084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:04.941115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:04.941209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:04.941225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:04.941304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:04.941322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:04.941394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:04.941410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.722 #43 NEW cov: 12591 ft: 14809 corp: 20/1646b lim: 100 exec/s: 43 rss: 75Mb L: 92/99 MS: 1 CopyPart- 00:12:27.722 [2024-11-26 18:09:05.001930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.001961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:05.002076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.002094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:05.002174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.002193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:05.002277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.002295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.722 #44 NEW cov: 12591 ft: 14877 corp: 21/1737b lim: 100 exec/s: 44 rss: 75Mb L: 91/99 MS: 1 
CopyPart- 00:12:27.722 [2024-11-26 18:09:05.092136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.092169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:05.092256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.092272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:05.092342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.092360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.722 [2024-11-26 18:09:05.092412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.722 [2024-11-26 18:09:05.092429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.722 #45 NEW cov: 12591 ft: 14894 corp: 22/1829b lim: 100 exec/s: 45 rss: 75Mb L: 92/99 MS: 1 CrossOver- 00:12:27.981 [2024-11-26 18:09:05.182927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16564204105074664830 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.182958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.981 [2024-11-26 18:09:05.183049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.183066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.981 [2024-11-26 18:09:05.183148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.183166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.981 [2024-11-26 18:09:05.183261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.183279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.981 #46 NEW cov: 12591 ft: 14913 corp: 23/1913b lim: 100 exec/s: 46 rss: 75Mb L: 84/99 MS: 1 CopyPart- 00:12:27.981 [2024-11-26 18:09:05.273607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.273639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.981 [2024-11-26 18:09:05.273726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.273744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.981 [2024-11-26 18:09:05.273803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-11-26 18:09:05.273820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.982 [2024-11-26 18:09:05.273884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.982 [2024-11-26 18:09:05.273907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.982 #47 NEW cov: 12591 ft: 14941 corp: 24/2011b lim: 100 exec/s: 47 rss: 76Mb L: 98/99 MS: 1 CopyPart- 00:12:27.982 [2024-11-26 18:09:05.333946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.982 [2024-11-26 18:09:05.333975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.982 [2024-11-26 18:09:05.334081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.982 [2024-11-26 18:09:05.334100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.982 [2024-11-26 18:09:05.334198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.982 [2024-11-26 18:09:05.334212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.982 [2024-11-26 18:09:05.334304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:16131858542891098079 len:57312 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.982 [2024-11-26 18:09:05.334322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.982 #48 NEW cov: 12591 ft: 14971 corp: 25/2102b lim: 100 exec/s: 24 rss: 76Mb L: 91/99 MS: 1 ChangeBit- 00:12:27.982 #48 DONE cov: 12591 ft: 14971 corp: 25/2102b lim: 100 exec/s: 24 rss: 76Mb 00:12:27.982 ###### Recommended dictionary. ###### 00:12:27.982 "\001\000" # Uses: 1 00:12:27.982 ###### End of recommended dictionary. 
###### 00:12:27.982 Done 48 runs in 2 second(s) 00:12:28.240 18:09:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:12:28.240 18:09:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:28.240 18:09:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:28.240 18:09:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:12:28.240 00:12:28.240 real 1m4.045s 00:12:28.240 user 1m44.415s 00:12:28.240 sys 0m6.878s 00:12:28.240 18:09:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.240 18:09:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.240 ************************************ 00:12:28.240 END TEST nvmf_llvm_fuzz 00:12:28.240 ************************************ 00:12:28.240 18:09:05 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:12:28.240 18:09:05 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:12:28.240 18:09:05 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:12:28.240 18:09:05 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.240 18:09:05 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.240 18:09:05 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.240 ************************************ 00:12:28.240 START TEST vfio_llvm_fuzz 00:12:28.240 ************************************ 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:12:28.240 * Looking for test storage... 
00:12:28.240 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.240 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.503 --rc genhtml_branch_coverage=1 00:12:28.503 --rc genhtml_function_coverage=1 00:12:28.503 --rc genhtml_legend=1 00:12:28.503 --rc geninfo_all_blocks=1 00:12:28.503 --rc geninfo_unexecuted_blocks=1 00:12:28.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.503 ' 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.503 --rc genhtml_branch_coverage=1 00:12:28.503 --rc genhtml_function_coverage=1 00:12:28.503 --rc genhtml_legend=1 00:12:28.503 --rc geninfo_all_blocks=1 00:12:28.503 --rc geninfo_unexecuted_blocks=1 00:12:28.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.503 ' 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.503 --rc genhtml_branch_coverage=1 00:12:28.503 --rc genhtml_function_coverage=1 00:12:28.503 --rc genhtml_legend=1 00:12:28.503 --rc geninfo_all_blocks=1 00:12:28.503 --rc geninfo_unexecuted_blocks=1 00:12:28.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.503 ' 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.503 --rc genhtml_branch_coverage=1 00:12:28.503 --rc genhtml_function_coverage=1 00:12:28.503 --rc genhtml_legend=1 00:12:28.503 --rc geninfo_all_blocks=1 00:12:28.503 --rc geninfo_unexecuted_blocks=1 00:12:28.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.503 ' 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:12:28.503 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:12:28.504 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:28.504 #define SPDK_CONFIG_H 00:12:28.504 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:28.504 #define SPDK_CONFIG_APPS 1 00:12:28.504 #define SPDK_CONFIG_ARCH native 00:12:28.504 #undef SPDK_CONFIG_ASAN 00:12:28.504 #undef SPDK_CONFIG_AVAHI 00:12:28.504 #undef SPDK_CONFIG_CET 00:12:28.504 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:28.504 #define SPDK_CONFIG_COVERAGE 1 00:12:28.504 #define SPDK_CONFIG_CROSS_PREFIX 00:12:28.504 #undef SPDK_CONFIG_CRYPTO 00:12:28.504 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:28.504 #undef SPDK_CONFIG_CUSTOMOCF 00:12:28.504 #undef SPDK_CONFIG_DAOS 00:12:28.504 #define SPDK_CONFIG_DAOS_DIR 00:12:28.504 #define SPDK_CONFIG_DEBUG 1 00:12:28.504 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:28.504 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:12:28.504 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:28.504 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:28.504 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:28.504 #undef SPDK_CONFIG_DPDK_UADK 00:12:28.504 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:12:28.504 #define SPDK_CONFIG_EXAMPLES 1 00:12:28.504 #undef SPDK_CONFIG_FC 00:12:28.504 #define SPDK_CONFIG_FC_PATH 00:12:28.504 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:28.504 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:28.504 #define SPDK_CONFIG_FSDEV 1 00:12:28.504 #undef SPDK_CONFIG_FUSE 00:12:28.504 #define SPDK_CONFIG_FUZZER 1 00:12:28.504 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:12:28.504 #undef 
SPDK_CONFIG_GOLANG 00:12:28.504 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:28.504 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:28.504 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:28.504 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:28.504 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:28.504 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:28.504 #undef SPDK_CONFIG_HAVE_LZ4 00:12:28.504 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:28.504 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:28.504 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:28.504 #define SPDK_CONFIG_IDXD 1 00:12:28.504 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:28.504 #undef SPDK_CONFIG_IPSEC_MB 00:12:28.504 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:28.504 #define SPDK_CONFIG_ISAL 1 00:12:28.504 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:28.504 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:28.504 #define SPDK_CONFIG_LIBDIR 00:12:28.504 #undef SPDK_CONFIG_LTO 00:12:28.504 #define SPDK_CONFIG_MAX_LCORES 128 00:12:28.504 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:28.504 #define SPDK_CONFIG_NVME_CUSE 1 00:12:28.504 #undef SPDK_CONFIG_OCF 00:12:28.504 #define SPDK_CONFIG_OCF_PATH 00:12:28.504 #define SPDK_CONFIG_OPENSSL_PATH 00:12:28.504 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:28.504 #define SPDK_CONFIG_PGO_DIR 00:12:28.504 #undef SPDK_CONFIG_PGO_USE 00:12:28.504 #define SPDK_CONFIG_PREFIX /usr/local 00:12:28.504 #undef SPDK_CONFIG_RAID5F 00:12:28.504 #undef SPDK_CONFIG_RBD 00:12:28.504 #define SPDK_CONFIG_RDMA 1 00:12:28.504 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:28.504 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:28.504 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:28.504 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:28.504 #undef SPDK_CONFIG_SHARED 00:12:28.504 #undef SPDK_CONFIG_SMA 00:12:28.504 #define SPDK_CONFIG_TESTS 1 00:12:28.504 #undef SPDK_CONFIG_TSAN 00:12:28.504 #define SPDK_CONFIG_UBLK 1 00:12:28.504 #define SPDK_CONFIG_UBSAN 1 00:12:28.504 #undef SPDK_CONFIG_UNIT_TESTS 00:12:28.504 #undef SPDK_CONFIG_URING 00:12:28.504 #define SPDK_CONFIG_URING_PATH 00:12:28.504 #undef SPDK_CONFIG_URING_ZNS 00:12:28.504 #undef SPDK_CONFIG_USDT 00:12:28.504 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:28.504 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:28.504 #define SPDK_CONFIG_VFIO_USER 1 00:12:28.504 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:28.504 #define SPDK_CONFIG_VHOST 1 00:12:28.504 #define SPDK_CONFIG_VIRTIO 1 00:12:28.504 #undef SPDK_CONFIG_VTUNE 00:12:28.505 #define SPDK_CONFIG_VTUNE_DIR 00:12:28.505 #define SPDK_CONFIG_WERROR 1 00:12:28.505 #define SPDK_CONFIG_WPDK_DIR 00:12:28.505 #undef SPDK_CONFIG_XNVME 00:12:28.505 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:28.505 18:09:05 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:12:28.505 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:12:28.506 18:09:05 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:28.506 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 3297457 ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 3297457 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.fJwzMq 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.fJwzMq/tests/vfio /tmp/spdk.fJwzMq 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=1692594176 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3591835648 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=183505690624 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=195957915648 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 
-- # uses["$mount"]=12452225024 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=97974194176 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=97978957824 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=39185625088 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=39191584768 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5959680 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=97978650624 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=97978957824 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=307200 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=19595776000 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=19595788288 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:28.507 * Looking for test storage... 
00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=183505690624 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=14666817536 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:28.507 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.507 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.508 --rc genhtml_branch_coverage=1 00:12:28.508 --rc genhtml_function_coverage=1 00:12:28.508 --rc genhtml_legend=1 00:12:28.508 --rc geninfo_all_blocks=1 00:12:28.508 --rc geninfo_unexecuted_blocks=1 00:12:28.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.508 ' 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.508 --rc genhtml_branch_coverage=1 00:12:28.508 --rc genhtml_function_coverage=1 00:12:28.508 --rc genhtml_legend=1 00:12:28.508 --rc geninfo_all_blocks=1 00:12:28.508 --rc geninfo_unexecuted_blocks=1 00:12:28.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.508 ' 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.508 --rc genhtml_branch_coverage=1 00:12:28.508 --rc genhtml_function_coverage=1 00:12:28.508 --rc genhtml_legend=1 00:12:28.508 --rc geninfo_all_blocks=1 00:12:28.508 --rc geninfo_unexecuted_blocks=1 00:12:28.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.508 ' 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.508 --rc genhtml_branch_coverage=1 00:12:28.508 --rc genhtml_function_coverage=1 00:12:28.508 --rc genhtml_legend=1 00:12:28.508 --rc geninfo_all_blocks=1 00:12:28.508 --rc geninfo_unexecuted_blocks=1 00:12:28.508 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:28.508 ' 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:12:28.508 18:09:05 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:12:28.508 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:28.508 18:09:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:12:28.766 [2024-11-26 18:09:05.959349] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:28.766 [2024-11-26 18:09:05.959414] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297628 ] 00:12:28.766 [2024-11-26 18:09:06.049086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.766 [2024-11-26 18:09:06.098001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.024 INFO: Running with entropic power schedule (0xFF, 100). 00:12:29.024 INFO: Seed: 3227891302 00:12:29.024 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:29.024 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:29.024 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:29.024 INFO: A corpus is not provided, starting from an empty corpus 00:12:29.024 #2 INITED exec/s: 0 rss: 67Mb 00:12:29.024 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:29.024 This may also happen if the target rejected all inputs we tried so far 00:12:29.024 [2024-11-26 18:09:06.374077] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:12:29.282 NEW_FUNC[1/676]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:12:29.282 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:29.282 #9 NEW cov: 11220 ft: 11186 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 2 InsertByte-InsertRepeatedBytes- 00:12:29.540 #18 NEW cov: 11234 ft: 14074 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 4 ChangeBit-InsertRepeatedBytes-CopyPart-InsertByte- 00:12:29.540 #19 NEW cov: 11234 ft: 15699 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:12:29.799 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:29.799 #20 NEW cov: 11251 ft: 15798 corp: 5/25b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:12:30.057 #21 NEW cov: 11251 ft: 15938 corp: 6/31b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:12:30.057 #22 NEW cov: 11251 ft: 16335 corp: 7/37b lim: 6 exec/s: 22 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:12:30.315 #23 NEW cov: 11251 ft: 16389 corp: 8/43b lim: 6 exec/s: 23 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:12:30.315 #24 NEW cov: 11251 ft: 16625 corp: 9/49b lim: 6 exec/s: 24 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:12:30.573 #25 NEW cov: 11251 ft: 16815 corp: 10/55b lim: 6 exec/s: 25 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:12:30.573 #26 NEW cov: 11251 ft: 16963 corp: 11/61b lim: 6 exec/s: 26 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:12:30.831 #47 NEW cov: 11258 ft: 17017 corp: 12/67b lim: 6 exec/s: 47 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:12:30.831 #48 NEW cov: 11258 ft: 17049 corp: 13/73b lim: 6 exec/s: 48 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:12:31.090 
#49 NEW cov: 11258 ft: 17228 corp: 14/79b lim: 6 exec/s: 24 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:12:31.090 #49 DONE cov: 11258 ft: 17228 corp: 14/79b lim: 6 exec/s: 24 rss: 77Mb 00:12:31.090 Done 49 runs in 2 second(s) 00:12:31.090 [2024-11-26 18:09:08.439608] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:12:31.349 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:31.349 18:09:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:12:31.349 [2024-11-26 18:09:08.739466] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 
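Each start_llvm_fuzz iteration traced here repeats the same per-index setup before launching the fuzzer: create the /tmp/vfio-user-N directories and corpus directory, rewrite the template vfio-user JSON config with sed, register the LSAN leak suppressions, then start llvm_vfio_fuzz against that run's sockets. A condensed sketch of that pattern is below; the directory layout and command-line flags are copied from the trace, while SPDK_DIR and the function body itself are illustrative assumptions, not the verbatim vfio/run.sh.

    start_llvm_fuzz_sketch() {
        local idx=$1 timen=$2 core=$3
        local fuzzer_dir=/tmp/vfio-user-$idx
        local corpus_dir=$SPDK_DIR/../corpus/llvm_vfio_$idx

        mkdir -p "$fuzzer_dir"/domain/1 "$fuzzer_dir"/domain/2 "$corpus_dir"

        # Point the template JSON config at this run's vfio-user socket directories.
        sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%" \
            -e "s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
            "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$fuzzer_dir/fuzz_vfio_json.conf"

        # Suppress known, intentional leaks so LSAN does not fail the short run.
        echo leak:spdk_nvmf_qpair_disconnect >  /var/tmp/suppress_vfio_fuzz
        echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_vfio_fuzz

        LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
        "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
            -m "$core" -s 0 -P "$SPDK_DIR/../output/llvm/" \
            -F "$fuzzer_dir/domain/1" -Y "$fuzzer_dir/domain/2" \
            -c "$fuzzer_dir/fuzz_vfio_json.conf" \
            -D "$corpus_dir" -r "$fuzzer_dir/spdk$idx.sock" \
            -t "$timen" -Z "$idx"
    }
    # e.g. start_llvm_fuzz_sketch 1 1 0x1    # fuzzer_type=1, 1 second, core mask 0x1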
00:12:31.349 [2024-11-26 18:09:08.739526] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298545 ] 00:12:31.609 [2024-11-26 18:09:08.845971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.609 [2024-11-26 18:09:08.894205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.867 INFO: Running with entropic power schedule (0xFF, 100). 00:12:31.867 INFO: Seed: 1724904451 00:12:31.867 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:31.867 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:31.867 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:31.867 INFO: A corpus is not provided, starting from an empty corpus 00:12:31.867 #2 INITED exec/s: 0 rss: 67Mb 00:12:31.867 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:31.867 This may also happen if the target rejected all inputs we tried so far 00:12:31.867 [2024-11-26 18:09:09.164743] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:12:31.867 [2024-11-26 18:09:09.198429] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:31.867 [2024-11-26 18:09:09.198460] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:31.867 [2024-11-26 18:09:09.198477] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:32.126 NEW_FUNC[1/678]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:12:32.126 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:32.126 #26 NEW cov: 11204 ft: 11149 corp: 2/5b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 4 ShuffleBytes-InsertByte-ShuffleBytes-CopyPart- 00:12:32.126 [2024-11-26 18:09:09.475880] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:32.126 [2024-11-26 18:09:09.475916] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:32.126 [2024-11-26 18:09:09.475934] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:32.126 #27 NEW cov: 11220 ft: 14642 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:12:32.385 [2024-11-26 18:09:09.632072] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:32.385 [2024-11-26 18:09:09.632100] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:32.385 [2024-11-26 18:09:09.632117] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:32.385 #28 NEW cov: 11220 ft: 16252 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:12:32.385 [2024-11-26 18:09:09.788083] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:32.385 [2024-11-26 18:09:09.788108] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:32.385 [2024-11-26 18:09:09.788130] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:32.644 
#29 NEW cov: 11220 ft: 16908 corp: 5/17b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:12:32.644 [2024-11-26 18:09:09.944249] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:32.644 [2024-11-26 18:09:09.944275] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:32.644 [2024-11-26 18:09:09.944291] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:32.644 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:32.644 #45 NEW cov: 11237 ft: 17019 corp: 6/21b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:12:32.903 [2024-11-26 18:09:10.102384] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:32.903 [2024-11-26 18:09:10.102421] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:32.903 [2024-11-26 18:09:10.102438] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:32.903 #46 NEW cov: 11240 ft: 17208 corp: 7/25b lim: 4 exec/s: 46 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:12:32.903 [2024-11-26 18:09:10.260285] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:32.903 [2024-11-26 18:09:10.260312] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:32.903 [2024-11-26 18:09:10.260328] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:33.162 #47 NEW cov: 11240 ft: 17391 corp: 8/29b lim: 4 exec/s: 47 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt- 00:12:33.162 [2024-11-26 18:09:10.418213] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:33.162 [2024-11-26 18:09:10.418238] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:33.162 [2024-11-26 18:09:10.418253] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:33.162 #48 NEW cov: 11240 ft: 17658 corp: 9/33b lim: 4 exec/s: 48 rss: 76Mb L: 4/4 MS: 1 CrossOver- 00:12:33.162 [2024-11-26 18:09:10.576232] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:33.162 [2024-11-26 18:09:10.576258] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:33.162 [2024-11-26 18:09:10.576276] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:33.420 #49 NEW cov: 11240 ft: 17701 corp: 10/37b lim: 4 exec/s: 49 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:12:33.420 [2024-11-26 18:09:10.733468] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:33.420 [2024-11-26 18:09:10.733492] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:33.420 [2024-11-26 18:09:10.733509] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:33.420 #55 NEW cov: 11240 ft: 17771 corp: 11/41b lim: 4 exec/s: 55 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:12:33.679 [2024-11-26 18:09:10.889892] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:33.679 [2024-11-26 18:09:10.889917] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:33.679 [2024-11-26 18:09:10.889933] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:33.679 #56 NEW cov: 11247 
ft: 17797 corp: 12/45b lim: 4 exec/s: 56 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:12:33.679 [2024-11-26 18:09:11.046103] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:33.679 [2024-11-26 18:09:11.046130] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:33.679 [2024-11-26 18:09:11.046147] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:33.938 #57 NEW cov: 11247 ft: 18212 corp: 13/49b lim: 4 exec/s: 28 rss: 76Mb L: 4/4 MS: 1 CrossOver- 00:12:33.938 #57 DONE cov: 11247 ft: 18212 corp: 13/49b lim: 4 exec/s: 28 rss: 76Mb 00:12:33.938 Done 57 runs in 2 second(s) 00:12:33.938 [2024-11-26 18:09:11.162589] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:12:34.197 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:34.197 18:09:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r 
/tmp/vfio-user-2/spdk2.sock -Z 2 00:12:34.197 [2024-11-26 18:09:11.458358] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:34.197 [2024-11-26 18:09:11.458446] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299077 ] 00:12:34.197 [2024-11-26 18:09:11.545687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.197 [2024-11-26 18:09:11.594615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.457 INFO: Running with entropic power schedule (0xFF, 100). 00:12:34.457 INFO: Seed: 132936714 00:12:34.457 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:34.457 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:34.457 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:34.457 INFO: A corpus is not provided, starting from an empty corpus 00:12:34.457 #2 INITED exec/s: 0 rss: 67Mb 00:12:34.457 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:34.457 This may also happen if the target rejected all inputs we tried so far 00:12:34.457 [2024-11-26 18:09:11.865296] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:12:34.715 [2024-11-26 18:09:11.936780] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:34.974 NEW_FUNC[1/677]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:12:34.974 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:34.974 #17 NEW cov: 11199 ft: 10865 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 5 ChangeBit-InsertRepeatedBytes-ShuffleBytes-ShuffleBytes-InsertByte- 00:12:34.974 [2024-11-26 18:09:12.305858] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:35.233 #23 NEW cov: 11213 ft: 13588 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:12:35.233 [2024-11-26 18:09:12.503993] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:35.233 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:35.233 #24 NEW cov: 11230 ft: 14516 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:12:35.492 [2024-11-26 18:09:12.701913] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:35.492 #30 NEW cov: 11230 ft: 15237 corp: 5/33b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:12:35.492 [2024-11-26 18:09:12.904191] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:35.751 #31 NEW cov: 11230 ft: 16263 corp: 6/41b lim: 8 exec/s: 31 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:12:35.751 [2024-11-26 18:09:13.098111] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:36.010 #32 NEW cov: 11230 ft: 16340 corp: 7/49b lim: 8 exec/s: 32 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:12:36.010 [2024-11-26 18:09:13.299859] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: 
Oversized argument length, command 5 00:12:36.010 #37 NEW cov: 11230 ft: 16526 corp: 8/57b lim: 8 exec/s: 37 rss: 76Mb L: 8/8 MS: 5 CrossOver-InsertRepeatedBytes-ShuffleBytes-InsertByte-InsertByte- 00:12:36.269 [2024-11-26 18:09:13.495412] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:36.269 #38 NEW cov: 11237 ft: 17119 corp: 9/65b lim: 8 exec/s: 38 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:12:36.269 [2024-11-26 18:09:13.695114] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:36.527 #39 NEW cov: 11237 ft: 17178 corp: 10/73b lim: 8 exec/s: 39 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:12:36.527 [2024-11-26 18:09:13.893947] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:36.786 #40 NEW cov: 11237 ft: 17286 corp: 11/81b lim: 8 exec/s: 20 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:12:36.786 #40 DONE cov: 11237 ft: 17286 corp: 11/81b lim: 8 exec/s: 20 rss: 77Mb 00:12:36.786 Done 40 runs in 2 second(s) 00:12:36.786 [2024-11-26 18:09:14.033581] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:12:37.045 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:37.045 18:09:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:12:37.045 [2024-11-26 18:09:14.340067] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:37.045 [2024-11-26 18:09:14.340150] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299594 ] 00:12:37.046 [2024-11-26 18:09:14.426546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.046 [2024-11-26 18:09:14.475631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.304 INFO: Running with entropic power schedule (0xFF, 100). 00:12:37.304 INFO: Seed: 3009942083 00:12:37.304 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:37.305 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:37.305 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:37.305 INFO: A corpus is not provided, starting from an empty corpus 00:12:37.305 #2 INITED exec/s: 0 rss: 67Mb 00:12:37.305 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:37.305 This may also happen if the target rejected all inputs we tried so far 00:12:37.305 [2024-11-26 18:09:14.742183] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:12:37.563 NEW_FUNC[1/677]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:12:37.563 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:37.563 #257 NEW cov: 11204 ft: 11146 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 CopyPart-InsertRepeatedBytes-ChangeBinInt-ShuffleBytes-InsertByte- 00:12:37.821 #262 NEW cov: 11221 ft: 13922 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 5 EraseBytes-InsertByte-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:12:38.080 #268 NEW cov: 11221 ft: 15358 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:12:38.080 #269 NEW cov: 11221 ft: 15717 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:38.339 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:38.339 #270 NEW cov: 11238 ft: 16137 corp: 6/161b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:38.339 #271 NEW cov: 11238 ft: 16390 corp: 7/193b lim: 32 exec/s: 271 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:38.598 #272 NEW cov: 11238 ft: 16638 corp: 8/225b lim: 32 exec/s: 272 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:38.856 #273 NEW cov: 11238 ft: 16852 corp: 9/257b lim: 32 exec/s: 273 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:12:38.856 #274 NEW cov: 11238 ft: 17351 corp: 10/289b lim: 32 exec/s: 274 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:12:39.115 #275 NEW cov: 11238 ft: 17412 corp: 11/321b lim: 32 exec/s: 275 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:12:39.115 
#276 NEW cov: 11245 ft: 17589 corp: 12/353b lim: 32 exec/s: 276 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:39.375 [2024-11-26 18:09:16.604066] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18374967516561670144 > max 8796093022208 00:12:39.375 [2024-11-26 18:09:16.604110] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xff00ff9a00000000) offset=0x72c00ffffff flags=0x3: No space left on device 00:12:39.375 [2024-11-26 18:09:16.604121] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:12:39.375 [2024-11-26 18:09:16.604137] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:39.375 NEW_FUNC[1/1]: 0x1590738 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3098 00:12:39.375 #282 NEW cov: 11256 ft: 17734 corp: 13/385b lim: 32 exec/s: 141 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:12:39.375 #282 DONE cov: 11256 ft: 17734 corp: 13/385b lim: 32 exec/s: 141 rss: 77Mb 00:12:39.375 Done 282 runs in 2 second(s) 00:12:39.375 [2024-11-26 18:09:16.722585] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:12:39.634 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:39.634 18:09:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:39.634 18:09:16 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:12:39.634 [2024-11-26 18:09:17.021622] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:39.634 [2024-11-26 18:09:17.021703] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300036 ] 00:12:39.894 [2024-11-26 18:09:17.109146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.894 [2024-11-26 18:09:17.158317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.152 INFO: Running with entropic power schedule (0xFF, 100). 00:12:40.152 INFO: Seed: 1396983486 00:12:40.152 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:40.152 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:40.152 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:40.152 INFO: A corpus is not provided, starting from an empty corpus 00:12:40.152 #2 INITED exec/s: 0 rss: 67Mb 00:12:40.152 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:40.152 This may also happen if the target rejected all inputs we tried so far 00:12:40.152 [2024-11-26 18:09:17.428055] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:12:40.411 NEW_FUNC[1/676]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:12:40.411 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:40.411 #160 NEW cov: 11197 ft: 11055 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 3 InsertRepeatedBytes-ChangeByte-InsertByte- 00:12:40.411 NEW_FUNC[1/1]: 0x15e0a08 in vfio_user_poll_vfu_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:4911 00:12:40.411 #166 NEW cov: 11219 ft: 14335 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:12:40.669 #167 NEW cov: 11219 ft: 16003 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:12:40.927 #168 NEW cov: 11219 ft: 16398 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:40.927 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:40.927 #179 NEW cov: 11236 ft: 16623 corp: 6/161b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:41.183 #180 NEW cov: 11236 ft: 16774 corp: 7/193b lim: 32 exec/s: 180 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:41.183 #186 NEW cov: 11236 ft: 16833 corp: 8/225b lim: 32 exec/s: 186 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:41.440 #187 NEW cov: 11236 ft: 16942 corp: 9/257b lim: 32 exec/s: 187 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:12:41.440 #188 NEW cov: 11236 ft: 17022 corp: 10/289b lim: 32 exec/s: 188 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 
00:12:41.698 #189 NEW cov: 11236 ft: 17070 corp: 11/321b lim: 32 exec/s: 189 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:12:41.956 #190 NEW cov: 11243 ft: 17147 corp: 12/353b lim: 32 exec/s: 190 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:12:41.956 #196 NEW cov: 11243 ft: 17171 corp: 13/385b lim: 32 exec/s: 196 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:12:42.214 #197 NEW cov: 11243 ft: 17327 corp: 14/417b lim: 32 exec/s: 98 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:12:42.214 #197 DONE cov: 11243 ft: 17327 corp: 14/417b lim: 32 exec/s: 98 rss: 77Mb 00:12:42.214 Done 197 runs in 2 second(s) 00:12:42.214 [2024-11-26 18:09:19.516586] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:12:42.473 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:42.473 18:09:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:12:42.473 [2024-11-26 18:09:19.809145] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 
initialization... 00:12:42.473 [2024-11-26 18:09:19.809208] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300505 ] 00:12:42.473 [2024-11-26 18:09:19.899731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.731 [2024-11-26 18:09:19.949942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.731 INFO: Running with entropic power schedule (0xFF, 100). 00:12:42.731 INFO: Seed: 4186970561 00:12:42.731 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:42.731 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:42.990 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:12:42.990 INFO: A corpus is not provided, starting from an empty corpus 00:12:42.990 #2 INITED exec/s: 0 rss: 67Mb 00:12:42.990 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:42.990 This may also happen if the target rejected all inputs we tried so far 00:12:42.990 [2024-11-26 18:09:20.223090] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:12:42.990 [2024-11-26 18:09:20.260457] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:42.990 [2024-11-26 18:09:20.260494] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:43.248 NEW_FUNC[1/678]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:12:43.248 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:43.248 #39 NEW cov: 11211 ft: 11182 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 2 ChangeByte-InsertRepeatedBytes- 00:12:43.248 [2024-11-26 18:09:20.543109] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:43.248 [2024-11-26 18:09:20.543154] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:43.248 #50 NEW cov: 11225 ft: 14126 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:12:43.506 [2024-11-26 18:09:20.708638] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:43.506 [2024-11-26 18:09:20.708671] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:43.506 #51 NEW cov: 11225 ft: 15295 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:12:43.506 [2024-11-26 18:09:20.871176] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:43.506 [2024-11-26 18:09:20.871206] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:43.765 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:43.765 #52 NEW cov: 11242 ft: 15784 corp: 5/53b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:12:43.765 [2024-11-26 18:09:21.033548] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:43.765 [2024-11-26 18:09:21.033583] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
00:12:43.765 #53 NEW cov: 11242 ft: 16206 corp: 6/66b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:12:43.765 [2024-11-26 18:09:21.197933] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:43.765 [2024-11-26 18:09:21.197966] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:44.023 #54 NEW cov: 11242 ft: 16513 corp: 7/79b lim: 13 exec/s: 54 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:12:44.023 [2024-11-26 18:09:21.361827] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:44.023 [2024-11-26 18:09:21.361860] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:44.023 #55 NEW cov: 11242 ft: 16592 corp: 8/92b lim: 13 exec/s: 55 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:12:44.282 [2024-11-26 18:09:21.525284] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:44.282 [2024-11-26 18:09:21.525317] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:44.282 #56 NEW cov: 11245 ft: 16655 corp: 9/105b lim: 13 exec/s: 56 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:12:44.282 [2024-11-26 18:09:21.688891] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:44.282 [2024-11-26 18:09:21.688923] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:44.541 #57 NEW cov: 11245 ft: 16686 corp: 10/118b lim: 13 exec/s: 57 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt- 00:12:44.541 [2024-11-26 18:09:21.853859] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:44.541 [2024-11-26 18:09:21.853891] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:44.541 #58 NEW cov: 11245 ft: 16733 corp: 11/131b lim: 13 exec/s: 58 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:12:44.800 [2024-11-26 18:09:22.017921] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:44.800 [2024-11-26 18:09:22.017954] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:44.800 #61 NEW cov: 11252 ft: 16754 corp: 12/144b lim: 13 exec/s: 61 rss: 77Mb L: 13/13 MS: 3 EraseBytes-CopyPart-InsertByte- 00:12:44.800 [2024-11-26 18:09:22.181367] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:44.800 [2024-11-26 18:09:22.181421] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:45.059 #62 NEW cov: 11252 ft: 16793 corp: 13/157b lim: 13 exec/s: 31 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:12:45.059 #62 DONE cov: 11252 ft: 16793 corp: 13/157b lim: 13 exec/s: 31 rss: 77Mb 00:12:45.059 Done 62 runs in 2 second(s) 00:12:45.059 [2024-11-26 18:09:22.298592] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:45.319 18:09:22 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:12:45.319 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:45.319 18:09:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:12:45.319 [2024-11-26 18:09:22.600455] Starting SPDK v25.01-pre git sha1 f7ce15267 / DPDK 24.03.0 initialization... 00:12:45.319 [2024-11-26 18:09:22.600518] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300952 ] 00:12:45.319 [2024-11-26 18:09:22.688363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.319 [2024-11-26 18:09:22.737777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.578 INFO: Running with entropic power schedule (0xFF, 100). 00:12:45.578 INFO: Seed: 2688017987 00:12:45.578 INFO: Loaded 1 modules (386895 inline 8-bit counters): 386895 [0x2c2f00c, 0x2c8d75b), 00:12:45.578 INFO: Loaded 1 PC tables (386895 PCs): 386895 [0x2c8d760,0x3274c50), 00:12:45.578 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:12:45.578 INFO: A corpus is not provided, starting from an empty corpus 00:12:45.578 #2 INITED exec/s: 0 rss: 67Mb 00:12:45.578 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:45.578 This may also happen if the target rejected all inputs we tried so far 00:12:45.578 [2024-11-26 18:09:23.009135] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:12:45.849 [2024-11-26 18:09:23.052434] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:45.849 [2024-11-26 18:09:23.052468] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:45.849 NEW_FUNC[1/677]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:12:45.849 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:45.849 #20 NEW cov: 11187 ft: 11178 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:12:46.108 [2024-11-26 18:09:23.330603] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.108 [2024-11-26 18:09:23.330653] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:46.108 NEW_FUNC[1/1]: 0x1467968 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:12:46.108 #21 NEW cov: 11217 ft: 14136 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:12:46.108 [2024-11-26 18:09:23.489174] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.108 [2024-11-26 18:09:23.489208] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:46.367 #22 NEW cov: 11217 ft: 15686 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:12:46.367 [2024-11-26 18:09:23.649362] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.367 [2024-11-26 18:09:23.649403] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:46.367 NEW_FUNC[1/1]: 0x1c13688 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:46.367 #23 NEW cov: 11234 ft: 16032 corp: 5/37b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:12:46.367 [2024-11-26 18:09:23.809305] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.367 [2024-11-26 18:09:23.809338] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:46.626 #24 NEW cov: 11234 ft: 16135 corp: 6/46b lim: 9 exec/s: 0 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:12:46.626 [2024-11-26 18:09:23.967611] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.626 [2024-11-26 18:09:23.967645] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:46.626 #25 NEW cov: 11234 ft: 16205 corp: 7/55b lim: 9 exec/s: 25 rss: 77Mb L: 9/9 MS: 1 CopyPart- 00:12:46.886 [2024-11-26 18:09:24.126083] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.886 [2024-11-26 18:09:24.126117] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:46.886 #26 NEW cov: 11237 ft: 16409 corp: 8/64b lim: 9 exec/s: 26 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:12:46.886 [2024-11-26 18:09:24.286043] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:46.886 [2024-11-26 
18:09:24.286076] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:47.146 #27 NEW cov: 11237 ft: 16491 corp: 9/73b lim: 9 exec/s: 27 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:12:47.146 [2024-11-26 18:09:24.446653] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:47.146 [2024-11-26 18:09:24.446684] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:47.146 #28 NEW cov: 11237 ft: 16529 corp: 10/82b lim: 9 exec/s: 28 rss: 77Mb L: 9/9 MS: 1 ChangeBit- 00:12:47.406 [2024-11-26 18:09:24.605721] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:47.406 [2024-11-26 18:09:24.605753] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:47.406 #29 NEW cov: 11237 ft: 16625 corp: 11/91b lim: 9 exec/s: 29 rss: 77Mb L: 9/9 MS: 1 ChangeByte- 00:12:47.406 [2024-11-26 18:09:24.764923] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:47.406 [2024-11-26 18:09:24.764957] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:47.665 #39 NEW cov: 11244 ft: 16763 corp: 12/100b lim: 9 exec/s: 39 rss: 77Mb L: 9/9 MS: 5 InsertRepeatedBytes-ChangeBinInt-ShuffleBytes-EraseBytes-CopyPart- 00:12:47.665 [2024-11-26 18:09:24.933728] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:12:47.665 [2024-11-26 18:09:24.933759] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:12:47.665 #40 NEW cov: 11244 ft: 16779 corp: 13/109b lim: 9 exec/s: 20 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:12:47.665 #40 DONE cov: 11244 ft: 16779 corp: 13/109b lim: 9 exec/s: 20 rss: 77Mb 00:12:47.665 Done 40 runs in 2 second(s) 00:12:47.665 [2024-11-26 18:09:25.046587] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:12:47.924 18:09:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:12:47.924 18:09:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:47.924 18:09:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:47.924 18:09:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:12:47.924 00:12:47.924 real 0m19.760s 00:12:47.924 user 0m28.159s 00:12:47.924 sys 0m1.898s 00:12:47.924 18:09:25 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.924 18:09:25 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:47.924 ************************************ 00:12:47.924 END TEST vfio_llvm_fuzz 00:12:47.924 ************************************ 00:12:47.924 00:12:47.924 real 1m24.103s 00:12:47.924 user 2m12.709s 00:12:47.924 sys 0m8.958s 00:12:47.924 18:09:25 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.924 18:09:25 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:47.924 ************************************ 00:12:47.924 END TEST llvm_fuzz 00:12:47.924 ************************************ 00:12:48.183 18:09:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:12:48.183 18:09:25 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:12:48.183 18:09:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:12:48.183 18:09:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.183 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:12:48.183 18:09:25 -- 
spdk/autotest.sh@388 -- # autotest_cleanup 00:12:48.183 18:09:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:12:48.183 18:09:25 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:12:48.183 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:12:53.459 INFO: APP EXITING 00:12:53.459 INFO: killing all VMs 00:12:53.459 INFO: killing vhost app 00:12:53.459 WARN: no vhost pid file found 00:12:53.459 INFO: EXIT DONE 00:12:55.994 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:12:55.994 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:12:55.994 Waiting for block devices as requested 00:12:55.994 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:55.994 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:55.994 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:56.254 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:56.254 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:56.254 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:56.513 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:56.513 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:56.513 0000:d9:00.0 (8086 0a54): vfio-pci -> nvme 00:12:56.772 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:56.772 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:56.772 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:57.032 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:57.032 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:57.032 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:57.291 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:57.291 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:59.827 0000:5d:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:5d:05.5 00:12:59.827 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:12:59.827 Cleaning 00:12:59.827 Removing: /dev/shm/spdk_tgt_trace.pid3271882 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3268734 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3270206 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3271882 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3272383 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3273384 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3273644 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3274718 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3274731 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3275156 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3275471 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3275786 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3276119 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3276435 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3276712 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3276945 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3277233 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3277877 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3281431 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3281718 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3282001 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3282010 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3282528 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3282560 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3283005 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3283110 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3283392 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3283406 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3283694 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3283700 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3284309 00:12:59.827 Removing: 
/var/run/dpdk/spdk_pid3284587 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3284860 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3284942 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3285672 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3286187 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3286594 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3286985 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3287509 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3288030 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3288552 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3288899 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3289344 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3289860 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3290375 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3290757 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3291173 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3291687 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3292208 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3292720 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3293061 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3293517 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3294036 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3294549 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3294952 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3295347 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3295860 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3296417 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3297011 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3297628 00:12:59.827 Removing: /var/run/dpdk/spdk_pid3298545 00:13:00.087 Removing: /var/run/dpdk/spdk_pid3299077 00:13:00.087 Removing: /var/run/dpdk/spdk_pid3299594 00:13:00.087 Removing: /var/run/dpdk/spdk_pid3300036 00:13:00.087 Removing: /var/run/dpdk/spdk_pid3300505 00:13:00.087 Removing: /var/run/dpdk/spdk_pid3300952 00:13:00.087 Clean 00:13:00.087 18:09:37 -- common/autotest_common.sh@1453 -- # return 0 00:13:00.087 18:09:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:13:00.087 18:09:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.087 18:09:37 -- common/autotest_common.sh@10 -- # set +x 00:13:00.087 18:09:37 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:13:00.087 18:09:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.087 18:09:37 -- common/autotest_common.sh@10 -- # set +x 00:13:00.087 18:09:37 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:13:00.087 18:09:37 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:13:00.087 18:09:37 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:13:00.087 18:09:37 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:13:00.087 18:09:37 -- spdk/autotest.sh@398 -- # hostname 00:13:00.087 18:09:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-66 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:13:00.346 geninfo: WARNING: invalid characters removed from testname! 
00:13:06.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:13:07.865 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:13:13.140 18:09:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:25.400 18:10:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:31.971 18:10:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:40.111 18:10:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:46.733 18:10:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:55.005 18:10:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:14:01.617 18:10:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:14:01.617 18:10:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:14:01.617 18:10:38 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:14:01.617 18:10:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:14:01.617 18:10:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:14:01.617 18:10:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:14:01.617 + [[ -n 3162745 ]] 00:14:01.617 + sudo kill 3162745 00:14:01.627 [Pipeline] } 00:14:01.643 [Pipeline] // stage 00:14:01.649 [Pipeline] } 00:14:01.664 [Pipeline] // timeout 00:14:01.670 [Pipeline] } 00:14:01.683 [Pipeline] // catchError 00:14:01.687 [Pipeline] } 00:14:01.701 [Pipeline] // wrap 00:14:01.708 [Pipeline] } 00:14:01.722 [Pipeline] // catchError 00:14:01.731 [Pipeline] stage 00:14:01.733 [Pipeline] { (Epilogue) 00:14:01.747 [Pipeline] catchError 00:14:01.749 [Pipeline] { 00:14:01.764 [Pipeline] echo 00:14:01.766 Cleanup processes 00:14:01.772 [Pipeline] sh 00:14:02.058 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:02.058 3308563 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:02.071 [Pipeline] sh 00:14:02.355 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:02.355 ++ grep -v 'sudo pgrep' 00:14:02.355 ++ awk '{print $1}' 00:14:02.355 + sudo kill -9 00:14:02.355 + true 00:14:02.390 [Pipeline] sh 00:14:02.681 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:14:20.808 [Pipeline] sh 00:14:21.093 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:14:21.093 Artifacts sizes are good 00:14:21.109 [Pipeline] archiveArtifacts 00:14:21.117 Archiving artifacts 00:14:21.256 [Pipeline] sh 00:14:21.542 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:14:21.558 [Pipeline] cleanWs 00:14:21.568 [WS-CLEANUP] Deleting project workspace... 00:14:21.568 [WS-CLEANUP] Deferred wipeout is used... 00:14:21.575 [WS-CLEANUP] done 00:14:21.577 [Pipeline] } 00:14:21.600 [Pipeline] // catchError 00:14:21.614 [Pipeline] sh 00:14:21.896 + logger -p user.info -t JENKINS-CI 00:14:21.902 [Pipeline] } 00:14:21.910 [Pipeline] // stage 00:14:21.914 [Pipeline] } 00:14:21.923 [Pipeline] // node 00:14:21.926 [Pipeline] End of Pipeline 00:14:21.955 Finished: SUCCESS