00:00:00.001 Started by upstream project "autotest-nightly" build number 4149
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3511
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.047 Fetching changes from the remote Git repository
00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.070 Using shallow fetch with depth 1
00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.070 > git --version # timeout=10
00:00:00.096 > git --version # 'git version 2.39.2'
00:00:00.096 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.133 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.133 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.720 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.732 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.743 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD)
00:00:03.743 > git config core.sparsecheckout # timeout=10
00:00:03.755 > git read-tree -mu HEAD # timeout=10
00:00:03.767 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5
00:00:03.788 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions"
00:00:03.788 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10
00:00:03.912 [Pipeline] Start of Pipeline
00:00:03.956 [Pipeline] library
00:00:03.957 Loading library shm_lib@master
00:00:03.957 Library shm_lib@master is cached. Copying from home.
00:00:03.969 [Pipeline] node
00:00:03.984 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.985 [Pipeline] {
00:00:03.991 [Pipeline] catchError
00:00:03.992 [Pipeline] {
00:00:03.999 [Pipeline] wrap
00:00:04.003 [Pipeline] {
00:00:04.008 [Pipeline] stage
00:00:04.010 [Pipeline] { (Prologue)
00:00:04.191 [Pipeline] sh
00:00:04.479 + logger -p user.info -t JENKINS-CI
00:00:04.500 [Pipeline] echo
00:00:04.502 Node: WFP20
00:00:04.510 [Pipeline] sh
00:00:04.808 [Pipeline] setCustomBuildProperty
00:00:04.819 [Pipeline] echo
00:00:04.820 Cleanup processes
00:00:04.824 [Pipeline] sh
00:00:05.103 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.103 1349892 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.119 [Pipeline] sh
00:00:05.407 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.407 ++ grep -v 'sudo pgrep'
00:00:05.407 ++ awk '{print $1}'
00:00:05.407 + sudo kill -9
00:00:05.407 + true
00:00:05.420 [Pipeline] cleanWs
00:00:05.457 [WS-CLEANUP] Deleting project workspace...
00:00:05.457 [WS-CLEANUP] Deferred wipeout is used...
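The "Cleanup processes" step traced above reduces to a pgrep/kill idiom that tolerates an empty match list. A minimal standalone sketch of that pattern follows; the workspace path is the one from this log, and the trailing "|| true" stands in for the separate "+ true" visible in the trace. This is an illustration of the idiom, not the pipeline's actual script.

  #!/usr/bin/env bash
  # Sketch of the cleanup idiom seen in the trace above (hypothetical script).
  WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest   # path taken from the log
  # List matching processes with full command lines, drop the pgrep invocation
  # itself, and keep only the PID column.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 with no PIDs exits non-zero; tolerate that so the step stays green,
  # as the "+ true" in the log does.
  sudo kill -9 $pids || true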
00:00:05.463 [WS-CLEANUP] done 00:00:05.466 [Pipeline] setCustomBuildProperty 00:00:05.480 [Pipeline] sh 00:00:05.764 + sudo git config --global --replace-all safe.directory '*' 00:00:05.881 [Pipeline] httpRequest 00:00:08.917 [Pipeline] echo 00:00:08.921 Sorcerer 10.211.164.20 is dead 00:00:08.928 [Pipeline] httpRequest 00:00:11.944 [Pipeline] echo 00:00:11.945 Sorcerer 10.211.164.101 is dead 00:00:11.953 [Pipeline] httpRequest 00:00:12.181 [Pipeline] echo 00:00:12.183 Sorcerer 10.211.164.96 is dead 00:00:12.190 [Pipeline] httpRequest 00:00:15.209 [Pipeline] echo 00:00:15.210 Sorcerer 10.211.164.20 is dead 00:00:15.219 [Pipeline] httpRequest 00:00:15.570 [Pipeline] echo 00:00:15.571 Sorcerer 10.211.164.23 is alive 00:00:15.582 [Pipeline] retry 00:00:15.584 [Pipeline] { 00:00:15.598 [Pipeline] httpRequest 00:00:15.603 HttpMethod: GET 00:00:15.603 URL: http://10.211.164.23/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:15.604 Sending request to url: http://10.211.164.23/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:15.606 Response Code: HTTP/1.1 200 OK 00:00:15.606 Success: Status code 200 is in the accepted range: 200,404 00:00:15.607 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:15.753 [Pipeline] } 00:00:15.770 [Pipeline] // retry 00:00:15.777 [Pipeline] sh 00:00:16.065 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz 00:00:16.084 [Pipeline] httpRequest 00:00:16.427 [Pipeline] echo 00:00:16.429 Sorcerer 10.211.164.23 is alive 00:00:16.439 [Pipeline] retry 00:00:16.441 [Pipeline] { 00:00:16.456 [Pipeline] httpRequest 00:00:16.460 HttpMethod: GET 00:00:16.461 URL: http://10.211.164.23/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:00:16.461 Sending request to url: http://10.211.164.23/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:00:16.463 Response Code: HTTP/1.1 200 OK 00:00:16.464 Success: Status code 200 is in the accepted range: 200,404 00:00:16.464 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:00:27.620 [Pipeline] } 00:00:27.643 [Pipeline] // retry 00:00:27.652 [Pipeline] sh 00:00:27.938 + tar --no-same-owner -xf spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz 00:00:30.485 [Pipeline] sh 00:00:30.770 + git -C spdk log --oneline -n5 00:00:30.770 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected 00:00:30.770 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut 00:00:30.770 82c46626a lib/event: implement scheduler trace events 00:00:30.770 fa6aec495 lib/thread: register thread owner type for scheduler trace events 00:00:30.770 1876d41a3 include/spdk_internal: define scheduler tracegroup and tracepoints 00:00:30.782 [Pipeline] } 00:00:30.797 [Pipeline] // stage 00:00:30.807 [Pipeline] stage 00:00:30.809 [Pipeline] { (Prepare) 00:00:30.826 [Pipeline] writeFile 00:00:30.843 [Pipeline] sh 00:00:31.128 + logger -p user.info -t JENKINS-CI 00:00:31.141 [Pipeline] sh 00:00:31.426 + logger -p user.info -t JENKINS-CI 00:00:31.439 [Pipeline] sh 00:00:31.723 + cat autorun-spdk.conf 00:00:31.723 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.723 SPDK_TEST_FUZZER_SHORT=1 00:00:31.723 SPDK_TEST_FUZZER=1 00:00:31.723 SPDK_TEST_SETUP=1 00:00:31.723 SPDK_RUN_UBSAN=1 00:00:31.731 RUN_NIGHTLY=1 00:00:31.735 [Pipeline] readFile 00:00:31.759 [Pipeline] withEnv 00:00:31.761 [Pipeline] { 00:00:31.772 
[Pipeline] sh
00:00:32.057 + set -ex
00:00:32.057 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:32.057 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:32.057 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.057 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:32.057 ++ SPDK_TEST_FUZZER=1
00:00:32.057 ++ SPDK_TEST_SETUP=1
00:00:32.057 ++ SPDK_RUN_UBSAN=1
00:00:32.057 ++ RUN_NIGHTLY=1
00:00:32.057 + case $SPDK_TEST_NVMF_NICS in
00:00:32.057 + DRIVERS=
00:00:32.057 + [[ -n '' ]]
00:00:32.057 + exit 0
00:00:32.067 [Pipeline] }
00:00:32.082 [Pipeline] // withEnv
00:00:32.087 [Pipeline] }
00:00:32.101 [Pipeline] // stage
00:00:32.111 [Pipeline] catchError
00:00:32.113 [Pipeline] {
00:00:32.127 [Pipeline] timeout
00:00:32.127 Timeout set to expire in 30 min
00:00:32.129 [Pipeline] {
00:00:32.144 [Pipeline] stage
00:00:32.146 [Pipeline] { (Tests)
00:00:32.161 [Pipeline] sh
00:00:32.448 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:32.449 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:32.449 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:32.449 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:32.449 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:32.449 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:32.449 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:32.449 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:32.449 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:32.449 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:32.449 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:32.449 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:32.449 + source /etc/os-release
00:00:32.449 ++ NAME='Fedora Linux'
00:00:32.449 ++ VERSION='39 (Cloud Edition)'
00:00:32.449 ++ ID=fedora
00:00:32.449 ++ VERSION_ID=39
00:00:32.449 ++ VERSION_CODENAME=
00:00:32.449 ++ PLATFORM_ID=platform:f39
00:00:32.449 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:32.449 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:32.449 ++ LOGO=fedora-logo-icon
00:00:32.449 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:32.449 ++ HOME_URL=https://fedoraproject.org/
00:00:32.449 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:32.449 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:32.449 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:32.449 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:32.449 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:32.449 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:32.449 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:32.449 ++ SUPPORT_END=2024-11-12
00:00:32.449 ++ VARIANT='Cloud Edition'
00:00:32.449 ++ VARIANT_ID=cloud
00:00:32.449 + uname -a
00:00:32.449 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:32.449 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:35.746 Hugepages
00:00:35.746 node hugesize free / total
00:00:35.746 node0 1048576kB 0 / 0
00:00:35.746 node0 2048kB 0 / 0
00:00:35.746 node1 1048576kB 0 / 0
00:00:35.746 node1 2048kB 0 / 0
00:00:35.746 
00:00:35.746 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:35.746 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:35.746 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:35.746 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:35.746 + rm -f /tmp/spdk-ld-path
00:00:35.746 + source autorun-spdk.conf
00:00:35.746 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.746 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:35.746 ++ SPDK_TEST_FUZZER=1
00:00:35.746 ++ SPDK_TEST_SETUP=1
00:00:35.746 ++ SPDK_RUN_UBSAN=1
00:00:35.746 ++ RUN_NIGHTLY=1
00:00:35.746 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:35.746 + [[ -n '' ]]
00:00:35.746 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:35.746 + for M in /var/spdk/build-*-manifest.txt
00:00:35.746 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:35.746 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:35.746 + for M in /var/spdk/build-*-manifest.txt
00:00:35.746 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:35.746 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:35.746 + for M in /var/spdk/build-*-manifest.txt
00:00:35.746 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:35.746 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:35.746 ++ uname
00:00:35.746 + [[ Linux == \L\i\n\u\x ]]
00:00:35.746 + sudo dmesg -T
00:00:35.746 + sudo dmesg --clear
00:00:35.746 + dmesg_pid=1350797
00:00:35.746 + [[ Fedora Linux == FreeBSD ]]
00:00:35.746 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:35.746 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:35.746 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:35.746 + [[ -x /usr/src/fio-static/fio ]]
00:00:35.746 + export FIO_BIN=/usr/src/fio-static/fio
00:00:35.746 + FIO_BIN=/usr/src/fio-static/fio
00:00:35.746 + sudo dmesg -Tw
00:00:35.746 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:35.746 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:35.746 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:35.746 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:35.746 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:35.746 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:35.746 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:35.746 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:35.746 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:35.746 Test configuration: 00:00:35.747 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.747 SPDK_TEST_FUZZER_SHORT=1 00:00:35.747 SPDK_TEST_FUZZER=1 00:00:35.747 SPDK_TEST_SETUP=1 00:00:35.747 SPDK_RUN_UBSAN=1 00:00:35.747 RUN_NIGHTLY=1 17:49:57 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:35.747 17:49:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:35.747 17:49:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:35.747 17:49:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:35.747 17:49:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:35.747 17:49:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:35.747 17:49:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.747 17:49:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.747 17:49:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.747 17:49:57 -- paths/export.sh@5 -- $ export PATH 00:00:35.747 17:49:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.747 17:49:57 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:35.747 17:49:57 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:35.747 17:49:57 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728143397.XXXXXX 00:00:35.747 17:49:57 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728143397.Y9jDtU 00:00:35.747 17:49:57 -- common/autobuild_common.sh@488 -- $ [[ -n '' 
]] 00:00:35.747 17:49:57 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:35.747 17:49:57 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:35.747 17:49:57 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:35.747 17:49:57 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:35.747 17:49:57 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:35.747 17:49:57 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:35.747 17:49:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:35.747 17:49:57 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:35.747 17:49:57 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:35.747 17:49:57 -- pm/common@17 -- $ local monitor 00:00:35.747 17:49:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.747 17:49:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.747 17:49:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.747 17:49:57 -- pm/common@21 -- $ date +%s 00:00:35.747 17:49:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.747 17:49:57 -- pm/common@21 -- $ date +%s 00:00:35.747 17:49:57 -- pm/common@21 -- $ date +%s 00:00:35.747 17:49:57 -- pm/common@25 -- $ sleep 1 00:00:35.747 17:49:57 -- pm/common@21 -- $ date +%s 00:00:35.747 17:49:57 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728143397 00:00:35.747 17:49:57 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728143397 00:00:35.747 17:49:57 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728143397 00:00:35.747 17:49:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728143397 00:00:35.747 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728143397_collect-vmstat.pm.log 00:00:35.747 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728143397_collect-cpu-temp.pm.log 00:00:35.747 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728143397_collect-cpu-load.pm.log 00:00:35.747 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728143397_collect-bmc-pm.bmc.pm.log 00:00:36.688 17:49:58 -- 
common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:36.688 17:49:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:36.688 17:49:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:36.688 17:49:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:36.688 17:49:58 -- spdk/autobuild.sh@16 -- $ date -u 00:00:36.688 Sat Oct 5 03:49:58 PM UTC 2024 00:00:36.688 17:49:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:36.688 v25.01-pre-35-g3950cd1bb 00:00:36.688 17:49:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:36.688 17:49:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:36.688 17:49:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:36.688 17:49:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:36.688 17:49:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:36.688 17:49:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:36.948 ************************************ 00:00:36.948 START TEST ubsan 00:00:36.948 ************************************ 00:00:36.948 17:49:58 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:36.948 using ubsan 00:00:36.948 00:00:36.948 real 0m0.001s 00:00:36.948 user 0m0.000s 00:00:36.948 sys 0m0.001s 00:00:36.948 17:49:58 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:36.948 17:49:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:36.948 ************************************ 00:00:36.948 END TEST ubsan 00:00:36.949 ************************************ 00:00:36.949 17:49:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:36.949 17:49:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:36.949 17:49:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:36.949 17:49:58 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:36.949 17:49:58 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:36.949 17:49:58 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:36.949 17:49:58 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:00:36.949 17:49:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:36.949 17:49:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:36.949 ************************************ 00:00:36.949 START TEST autobuild_llvm_precompile 00:00:36.949 ************************************ 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:36.949 Target: x86_64-redhat-linux-gnu 00:00:36.949 Thread model: posix 00:00:36.949 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:36.949 17:49:58 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:37.208 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:37.208 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:37.467 Using 'verbs' RDMA provider 00:00:53.292 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:05.489 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:05.747 Creating mk/config.mk...done. 00:01:05.748 Creating mk/cc.flags.mk...done. 00:01:05.748 Type 'make' to build. 00:01:05.748 00:01:05.748 real 0m28.896s 00:01:05.748 user 0m12.766s 00:01:05.748 sys 0m15.481s 00:01:05.748 17:50:27 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:05.748 17:50:27 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:05.748 ************************************ 00:01:05.748 END TEST autobuild_llvm_precompile 00:01:05.748 ************************************ 00:01:05.748 17:50:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.748 17:50:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.748 17:50:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.748 17:50:27 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:05.748 17:50:27 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:06.006 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:06.006 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:06.573 Using 'verbs' RDMA provider 00:01:19.723 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:29.828 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:29.828 Creating mk/config.mk...done. 00:01:29.828 Creating mk/cc.flags.mk...done. 
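The autobuild_llvm_precompile stage above derives the compiler pair and the libFuzzer archive entirely from `clang --version` plus an extglob glob, then threads the result into configure as --with-fuzzer. A condensed sketch of that detection logic follows; the glob and the version regex are copied from the trace, while the surrounding script scaffolding is an illustration, not the autobuild_common.sh source.

  #!/usr/bin/env bash
  shopt -s extglob   # required for the @(...) and ?(...) patterns below
  # Pull "major.minor.patch" out of `clang --version`, as the traced regex does.
  [[ $(clang --version) =~ version\ (([0-9]+)\.([0-9]+)\.([0-9]+)) ]]
  clang_version=${BASH_REMATCH[1]}   # e.g. 17.0.6
  clang_num=${BASH_REMATCH[2]}       # e.g. 17
  export CC=clang-$clang_num CXX=clang++-$clang_num
  # Glob for the no-main libFuzzer archive under either version-style directory.
  fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  fuzzer_lib=${fuzzer_libs[0]}
  # Only pass --with-fuzzer if the archive actually exists (other flags omitted here).
  [[ -e $fuzzer_lib ]] && ./configure --with-fuzzer="$fuzzer_lib"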
00:01:29.828 Type 'make' to build. 00:01:29.828 17:50:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:29.828 17:50:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:29.828 17:50:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:29.828 17:50:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.828 ************************************ 00:01:29.828 START TEST make 00:01:29.828 ************************************ 00:01:29.828 17:50:51 make -- common/autotest_common.sh@1125 -- $ make -j112 00:01:30.087 make[1]: Nothing to be done for 'all'. 00:01:31.992 The Meson build system 00:01:31.992 Version: 1.5.0 00:01:31.992 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:31.992 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.992 Build type: native build 00:01:31.993 Project name: libvfio-user 00:01:31.993 Project version: 0.0.1 00:01:31.993 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:31.993 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:31.993 Host machine cpu family: x86_64 00:01:31.993 Host machine cpu: x86_64 00:01:31.993 Run-time dependency threads found: YES 00:01:31.993 Library dl found: YES 00:01:31.993 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:31.993 Run-time dependency json-c found: YES 0.17 00:01:31.993 Run-time dependency cmocka found: YES 1.1.7 00:01:31.993 Program pytest-3 found: NO 00:01:31.993 Program flake8 found: NO 00:01:31.993 Program misspell-fixer found: NO 00:01:31.993 Program restructuredtext-lint found: NO 00:01:31.993 Program valgrind found: YES (/usr/bin/valgrind) 00:01:31.993 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:31.993 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:31.993 Compiler for C supports arguments -Wwrite-strings: YES 00:01:31.993 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:31.993 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:31.993 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:31.993 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
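Each suite above runs through run_test, which prints START/END banners around the command and a real/user/sys timing triple (see the "START TEST ubsan" block earlier and "run_test make make -j112" here). A minimal wrapper in that spirit, assuming only standard bash; this is a sketch of the pattern, not SPDK's autotest_common.sh implementation.

  # Hypothetical run_test-style wrapper: banner, timed command, closing banner.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # bash's time keyword prints the real/user/sys triple
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test make make -j112      # as invoked in the trace above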
00:01:31.993 Build targets in project: 8 00:01:31.993 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:31.993 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:31.993 00:01:31.993 libvfio-user 0.0.1 00:01:31.993 00:01:31.993 User defined options 00:01:31.993 buildtype : debug 00:01:31.993 default_library: static 00:01:31.993 libdir : /usr/local/lib 00:01:31.993 00:01:31.993 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.993 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.253 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:32.253 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:32.253 [3/36] Compiling C object samples/null.p/null.c.o 00:01:32.253 [4/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:32.253 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:32.253 [6/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:32.253 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:32.253 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:32.253 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:32.253 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:32.253 [11/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:32.253 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:32.253 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:32.253 [14/36] Compiling C object samples/server.p/server.c.o 00:01:32.253 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:32.253 [16/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:32.253 [17/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:32.253 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:32.253 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:32.253 [20/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:32.253 [21/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:32.253 [22/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:32.253 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:32.253 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:32.253 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:32.253 [26/36] Compiling C object samples/client.p/client.c.o 00:01:32.253 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:32.253 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:32.253 [29/36] Linking static target lib/libvfio-user.a 00:01:32.253 [30/36] Linking target samples/client 00:01:32.253 [31/36] Linking target samples/server 00:01:32.253 [32/36] Linking target test/unit_tests 00:01:32.253 [33/36] Linking target samples/gpio-pci-idio-16 00:01:32.253 [34/36] Linking target samples/lspci 00:01:32.253 [35/36] Linking target samples/null 00:01:32.253 [36/36] Linking target samples/shadow_ioeventfd_server 00:01:32.253 INFO: autodetecting backend as ninja 00:01:32.253 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.513 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.772 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.772 ninja: no work to do. 00:01:38.050 The Meson build system 00:01:38.050 Version: 1.5.0 00:01:38.050 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:38.050 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:38.050 Build type: native build 00:01:38.050 Program cat found: YES (/usr/bin/cat) 00:01:38.050 Project name: DPDK 00:01:38.050 Project version: 24.03.0 00:01:38.050 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:38.050 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:38.050 Host machine cpu family: x86_64 00:01:38.050 Host machine cpu: x86_64 00:01:38.050 Message: ## Building in Developer Mode ## 00:01:38.050 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:38.050 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:38.050 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:38.050 Program python3 found: YES (/usr/bin/python3) 00:01:38.050 Program cat found: YES (/usr/bin/cat) 00:01:38.050 Compiler for C supports arguments -march=native: YES 00:01:38.050 Checking for size of "void *" : 8 00:01:38.050 Checking for size of "void *" : 8 (cached) 00:01:38.050 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:38.050 Library m found: YES 00:01:38.050 Library numa found: YES 00:01:38.050 Has header "numaif.h" : YES 00:01:38.050 Library fdt found: NO 00:01:38.050 Library execinfo found: NO 00:01:38.050 Has header "execinfo.h" : YES 00:01:38.050 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.050 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:38.050 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:38.050 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:38.050 Run-time dependency openssl found: YES 3.1.1 00:01:38.050 Run-time dependency libpcap found: YES 1.10.4 00:01:38.050 Has header "pcap.h" with dependency libpcap: YES 00:01:38.050 Compiler for C supports arguments -Wcast-qual: YES 00:01:38.050 Compiler for C supports arguments -Wdeprecated: YES 00:01:38.050 Compiler for C supports arguments -Wformat: YES 00:01:38.050 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:38.050 Compiler for C supports arguments -Wformat-security: YES 00:01:38.050 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.050 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:38.050 Compiler for C supports arguments -Wnested-externs: YES 00:01:38.050 Compiler for C supports arguments -Wold-style-definition: YES 00:01:38.050 Compiler for C supports arguments -Wpointer-arith: YES 00:01:38.050 Compiler for C supports arguments -Wsign-compare: YES 00:01:38.050 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:38.050 Compiler for C supports arguments -Wundef: YES 00:01:38.050 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.050 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:38.050 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:38.050 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:01:38.050 Program objdump found: YES (/usr/bin/objdump) 00:01:38.050 Compiler for C supports arguments -mavx512f: YES 00:01:38.050 Checking if "AVX512 checking" compiles: YES 00:01:38.050 Fetching value of define "__SSE4_2__" : 1 00:01:38.050 Fetching value of define "__AES__" : 1 00:01:38.050 Fetching value of define "__AVX__" : 1 00:01:38.050 Fetching value of define "__AVX2__" : 1 00:01:38.050 Fetching value of define "__AVX512BW__" : 1 00:01:38.050 Fetching value of define "__AVX512CD__" : 1 00:01:38.050 Fetching value of define "__AVX512DQ__" : 1 00:01:38.050 Fetching value of define "__AVX512F__" : 1 00:01:38.050 Fetching value of define "__AVX512VL__" : 1 00:01:38.050 Fetching value of define "__PCLMUL__" : 1 00:01:38.050 Fetching value of define "__RDRND__" : 1 00:01:38.050 Fetching value of define "__RDSEED__" : 1 00:01:38.050 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:38.050 Fetching value of define "__znver1__" : (undefined) 00:01:38.050 Fetching value of define "__znver2__" : (undefined) 00:01:38.050 Fetching value of define "__znver3__" : (undefined) 00:01:38.050 Fetching value of define "__znver4__" : (undefined) 00:01:38.050 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:38.050 Message: lib/log: Defining dependency "log" 00:01:38.050 Message: lib/kvargs: Defining dependency "kvargs" 00:01:38.050 Message: lib/telemetry: Defining dependency "telemetry" 00:01:38.050 Checking for function "getentropy" : NO 00:01:38.050 Message: lib/eal: Defining dependency "eal" 00:01:38.050 Message: lib/ring: Defining dependency "ring" 00:01:38.050 Message: lib/rcu: Defining dependency "rcu" 00:01:38.050 Message: lib/mempool: Defining dependency "mempool" 00:01:38.050 Message: lib/mbuf: Defining dependency "mbuf" 00:01:38.050 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:38.050 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.050 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.051 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.051 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:38.051 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:38.051 Compiler for C supports arguments -mpclmul: YES 00:01:38.051 Compiler for C supports arguments -maes: YES 00:01:38.051 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.051 Compiler for C supports arguments -mavx512bw: YES 00:01:38.051 Compiler for C supports arguments -mavx512dq: YES 00:01:38.051 Compiler for C supports arguments -mavx512vl: YES 00:01:38.051 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:38.051 Compiler for C supports arguments -mavx2: YES 00:01:38.051 Compiler for C supports arguments -mavx: YES 00:01:38.051 Message: lib/net: Defining dependency "net" 00:01:38.051 Message: lib/meter: Defining dependency "meter" 00:01:38.051 Message: lib/ethdev: Defining dependency "ethdev" 00:01:38.051 Message: lib/pci: Defining dependency "pci" 00:01:38.051 Message: lib/cmdline: Defining dependency "cmdline" 00:01:38.051 Message: lib/hash: Defining dependency "hash" 00:01:38.051 Message: lib/timer: Defining dependency "timer" 00:01:38.051 Message: lib/compressdev: Defining dependency "compressdev" 00:01:38.051 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:38.051 Message: lib/dmadev: Defining dependency "dmadev" 00:01:38.051 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:38.051 Message: lib/power: Defining dependency "power" 00:01:38.051 Message: lib/reorder: Defining 
dependency "reorder" 00:01:38.051 Message: lib/security: Defining dependency "security" 00:01:38.051 Has header "linux/userfaultfd.h" : YES 00:01:38.051 Has header "linux/vduse.h" : YES 00:01:38.051 Message: lib/vhost: Defining dependency "vhost" 00:01:38.051 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:38.051 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:38.051 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:38.051 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:38.051 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:38.051 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:38.051 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:38.051 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:38.051 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:38.051 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:38.051 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:38.051 Configuring doxy-api-html.conf using configuration 00:01:38.051 Configuring doxy-api-man.conf using configuration 00:01:38.051 Program mandb found: YES (/usr/bin/mandb) 00:01:38.051 Program sphinx-build found: NO 00:01:38.051 Configuring rte_build_config.h using configuration 00:01:38.051 Message: 00:01:38.051 ================= 00:01:38.051 Applications Enabled 00:01:38.051 ================= 00:01:38.051 00:01:38.051 apps: 00:01:38.051 00:01:38.051 00:01:38.051 Message: 00:01:38.051 ================= 00:01:38.051 Libraries Enabled 00:01:38.051 ================= 00:01:38.051 00:01:38.051 libs: 00:01:38.051 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:38.051 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:38.051 cryptodev, dmadev, power, reorder, security, vhost, 00:01:38.051 00:01:38.051 Message: 00:01:38.051 =============== 00:01:38.051 Drivers Enabled 00:01:38.051 =============== 00:01:38.051 00:01:38.051 common: 00:01:38.051 00:01:38.051 bus: 00:01:38.051 pci, vdev, 00:01:38.051 mempool: 00:01:38.051 ring, 00:01:38.051 dma: 00:01:38.051 00:01:38.051 net: 00:01:38.051 00:01:38.051 crypto: 00:01:38.051 00:01:38.051 compress: 00:01:38.051 00:01:38.051 vdpa: 00:01:38.051 00:01:38.051 00:01:38.051 Message: 00:01:38.051 ================= 00:01:38.051 Content Skipped 00:01:38.051 ================= 00:01:38.051 00:01:38.051 apps: 00:01:38.051 dumpcap: explicitly disabled via build config 00:01:38.051 graph: explicitly disabled via build config 00:01:38.051 pdump: explicitly disabled via build config 00:01:38.051 proc-info: explicitly disabled via build config 00:01:38.051 test-acl: explicitly disabled via build config 00:01:38.051 test-bbdev: explicitly disabled via build config 00:01:38.051 test-cmdline: explicitly disabled via build config 00:01:38.051 test-compress-perf: explicitly disabled via build config 00:01:38.051 test-crypto-perf: explicitly disabled via build config 00:01:38.051 test-dma-perf: explicitly disabled via build config 00:01:38.051 test-eventdev: explicitly disabled via build config 00:01:38.051 test-fib: explicitly disabled via build config 00:01:38.051 test-flow-perf: explicitly disabled via build config 00:01:38.051 test-gpudev: explicitly disabled via build config 00:01:38.051 test-mldev: explicitly disabled via build config 00:01:38.051 test-pipeline: explicitly disabled via build config 00:01:38.051 test-pmd: 
explicitly disabled via build config 00:01:38.051 test-regex: explicitly disabled via build config 00:01:38.051 test-sad: explicitly disabled via build config 00:01:38.051 test-security-perf: explicitly disabled via build config 00:01:38.051 00:01:38.051 libs: 00:01:38.051 argparse: explicitly disabled via build config 00:01:38.051 metrics: explicitly disabled via build config 00:01:38.051 acl: explicitly disabled via build config 00:01:38.051 bbdev: explicitly disabled via build config 00:01:38.051 bitratestats: explicitly disabled via build config 00:01:38.051 bpf: explicitly disabled via build config 00:01:38.051 cfgfile: explicitly disabled via build config 00:01:38.051 distributor: explicitly disabled via build config 00:01:38.051 efd: explicitly disabled via build config 00:01:38.051 eventdev: explicitly disabled via build config 00:01:38.051 dispatcher: explicitly disabled via build config 00:01:38.051 gpudev: explicitly disabled via build config 00:01:38.051 gro: explicitly disabled via build config 00:01:38.051 gso: explicitly disabled via build config 00:01:38.051 ip_frag: explicitly disabled via build config 00:01:38.051 jobstats: explicitly disabled via build config 00:01:38.051 latencystats: explicitly disabled via build config 00:01:38.051 lpm: explicitly disabled via build config 00:01:38.051 member: explicitly disabled via build config 00:01:38.051 pcapng: explicitly disabled via build config 00:01:38.051 rawdev: explicitly disabled via build config 00:01:38.051 regexdev: explicitly disabled via build config 00:01:38.051 mldev: explicitly disabled via build config 00:01:38.051 rib: explicitly disabled via build config 00:01:38.051 sched: explicitly disabled via build config 00:01:38.051 stack: explicitly disabled via build config 00:01:38.051 ipsec: explicitly disabled via build config 00:01:38.051 pdcp: explicitly disabled via build config 00:01:38.051 fib: explicitly disabled via build config 00:01:38.051 port: explicitly disabled via build config 00:01:38.051 pdump: explicitly disabled via build config 00:01:38.051 table: explicitly disabled via build config 00:01:38.051 pipeline: explicitly disabled via build config 00:01:38.051 graph: explicitly disabled via build config 00:01:38.051 node: explicitly disabled via build config 00:01:38.051 00:01:38.051 drivers: 00:01:38.051 common/cpt: not in enabled drivers build config 00:01:38.051 common/dpaax: not in enabled drivers build config 00:01:38.051 common/iavf: not in enabled drivers build config 00:01:38.051 common/idpf: not in enabled drivers build config 00:01:38.051 common/ionic: not in enabled drivers build config 00:01:38.051 common/mvep: not in enabled drivers build config 00:01:38.051 common/octeontx: not in enabled drivers build config 00:01:38.051 bus/auxiliary: not in enabled drivers build config 00:01:38.051 bus/cdx: not in enabled drivers build config 00:01:38.051 bus/dpaa: not in enabled drivers build config 00:01:38.051 bus/fslmc: not in enabled drivers build config 00:01:38.051 bus/ifpga: not in enabled drivers build config 00:01:38.051 bus/platform: not in enabled drivers build config 00:01:38.051 bus/uacce: not in enabled drivers build config 00:01:38.051 bus/vmbus: not in enabled drivers build config 00:01:38.051 common/cnxk: not in enabled drivers build config 00:01:38.051 common/mlx5: not in enabled drivers build config 00:01:38.051 common/nfp: not in enabled drivers build config 00:01:38.051 common/nitrox: not in enabled drivers build config 00:01:38.051 common/qat: not in enabled drivers build config 
00:01:38.051 common/sfc_efx: not in enabled drivers build config 00:01:38.051 mempool/bucket: not in enabled drivers build config 00:01:38.051 mempool/cnxk: not in enabled drivers build config 00:01:38.051 mempool/dpaa: not in enabled drivers build config 00:01:38.051 mempool/dpaa2: not in enabled drivers build config 00:01:38.051 mempool/octeontx: not in enabled drivers build config 00:01:38.051 mempool/stack: not in enabled drivers build config 00:01:38.051 dma/cnxk: not in enabled drivers build config 00:01:38.051 dma/dpaa: not in enabled drivers build config 00:01:38.051 dma/dpaa2: not in enabled drivers build config 00:01:38.051 dma/hisilicon: not in enabled drivers build config 00:01:38.051 dma/idxd: not in enabled drivers build config 00:01:38.051 dma/ioat: not in enabled drivers build config 00:01:38.051 dma/skeleton: not in enabled drivers build config 00:01:38.051 net/af_packet: not in enabled drivers build config 00:01:38.051 net/af_xdp: not in enabled drivers build config 00:01:38.051 net/ark: not in enabled drivers build config 00:01:38.051 net/atlantic: not in enabled drivers build config 00:01:38.051 net/avp: not in enabled drivers build config 00:01:38.051 net/axgbe: not in enabled drivers build config 00:01:38.051 net/bnx2x: not in enabled drivers build config 00:01:38.051 net/bnxt: not in enabled drivers build config 00:01:38.051 net/bonding: not in enabled drivers build config 00:01:38.051 net/cnxk: not in enabled drivers build config 00:01:38.051 net/cpfl: not in enabled drivers build config 00:01:38.051 net/cxgbe: not in enabled drivers build config 00:01:38.051 net/dpaa: not in enabled drivers build config 00:01:38.051 net/dpaa2: not in enabled drivers build config 00:01:38.051 net/e1000: not in enabled drivers build config 00:01:38.051 net/ena: not in enabled drivers build config 00:01:38.051 net/enetc: not in enabled drivers build config 00:01:38.051 net/enetfec: not in enabled drivers build config 00:01:38.051 net/enic: not in enabled drivers build config 00:01:38.051 net/failsafe: not in enabled drivers build config 00:01:38.051 net/fm10k: not in enabled drivers build config 00:01:38.051 net/gve: not in enabled drivers build config 00:01:38.052 net/hinic: not in enabled drivers build config 00:01:38.052 net/hns3: not in enabled drivers build config 00:01:38.052 net/i40e: not in enabled drivers build config 00:01:38.052 net/iavf: not in enabled drivers build config 00:01:38.052 net/ice: not in enabled drivers build config 00:01:38.052 net/idpf: not in enabled drivers build config 00:01:38.052 net/igc: not in enabled drivers build config 00:01:38.052 net/ionic: not in enabled drivers build config 00:01:38.052 net/ipn3ke: not in enabled drivers build config 00:01:38.052 net/ixgbe: not in enabled drivers build config 00:01:38.052 net/mana: not in enabled drivers build config 00:01:38.052 net/memif: not in enabled drivers build config 00:01:38.052 net/mlx4: not in enabled drivers build config 00:01:38.052 net/mlx5: not in enabled drivers build config 00:01:38.052 net/mvneta: not in enabled drivers build config 00:01:38.052 net/mvpp2: not in enabled drivers build config 00:01:38.052 net/netvsc: not in enabled drivers build config 00:01:38.052 net/nfb: not in enabled drivers build config 00:01:38.052 net/nfp: not in enabled drivers build config 00:01:38.052 net/ngbe: not in enabled drivers build config 00:01:38.052 net/null: not in enabled drivers build config 00:01:38.052 net/octeontx: not in enabled drivers build config 00:01:38.052 net/octeon_ep: not in enabled 
drivers build config 00:01:38.052 net/pcap: not in enabled drivers build config 00:01:38.052 net/pfe: not in enabled drivers build config 00:01:38.052 net/qede: not in enabled drivers build config 00:01:38.052 net/ring: not in enabled drivers build config 00:01:38.052 net/sfc: not in enabled drivers build config 00:01:38.052 net/softnic: not in enabled drivers build config 00:01:38.052 net/tap: not in enabled drivers build config 00:01:38.052 net/thunderx: not in enabled drivers build config 00:01:38.052 net/txgbe: not in enabled drivers build config 00:01:38.052 net/vdev_netvsc: not in enabled drivers build config 00:01:38.052 net/vhost: not in enabled drivers build config 00:01:38.052 net/virtio: not in enabled drivers build config 00:01:38.052 net/vmxnet3: not in enabled drivers build config 00:01:38.052 raw/*: missing internal dependency, "rawdev" 00:01:38.052 crypto/armv8: not in enabled drivers build config 00:01:38.052 crypto/bcmfs: not in enabled drivers build config 00:01:38.052 crypto/caam_jr: not in enabled drivers build config 00:01:38.052 crypto/ccp: not in enabled drivers build config 00:01:38.052 crypto/cnxk: not in enabled drivers build config 00:01:38.052 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.052 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.052 crypto/ipsec_mb: not in enabled drivers build config 00:01:38.052 crypto/mlx5: not in enabled drivers build config 00:01:38.052 crypto/mvsam: not in enabled drivers build config 00:01:38.052 crypto/nitrox: not in enabled drivers build config 00:01:38.052 crypto/null: not in enabled drivers build config 00:01:38.052 crypto/octeontx: not in enabled drivers build config 00:01:38.052 crypto/openssl: not in enabled drivers build config 00:01:38.052 crypto/scheduler: not in enabled drivers build config 00:01:38.052 crypto/uadk: not in enabled drivers build config 00:01:38.052 crypto/virtio: not in enabled drivers build config 00:01:38.052 compress/isal: not in enabled drivers build config 00:01:38.052 compress/mlx5: not in enabled drivers build config 00:01:38.052 compress/nitrox: not in enabled drivers build config 00:01:38.052 compress/octeontx: not in enabled drivers build config 00:01:38.052 compress/zlib: not in enabled drivers build config 00:01:38.052 regex/*: missing internal dependency, "regexdev" 00:01:38.052 ml/*: missing internal dependency, "mldev" 00:01:38.052 vdpa/ifc: not in enabled drivers build config 00:01:38.052 vdpa/mlx5: not in enabled drivers build config 00:01:38.052 vdpa/nfp: not in enabled drivers build config 00:01:38.052 vdpa/sfc: not in enabled drivers build config 00:01:38.052 event/*: missing internal dependency, "eventdev" 00:01:38.052 baseband/*: missing internal dependency, "bbdev" 00:01:38.052 gpu/*: missing internal dependency, "gpudev" 00:01:38.052 00:01:38.052 00:01:38.312 Build targets in project: 85 00:01:38.312 00:01:38.312 DPDK 24.03.0 00:01:38.312 00:01:38.312 User defined options 00:01:38.312 buildtype : debug 00:01:38.312 default_library : static 00:01:38.312 libdir : lib 00:01:38.312 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:38.312 c_args : -fPIC -Werror 00:01:38.312 c_link_args : 00:01:38.312 cpu_instruction_set: native 00:01:38.312 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:38.312 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:38.312 enable_docs : false 00:01:38.312 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:38.312 enable_kmods : false 00:01:38.312 max_lcores : 128 00:01:38.312 tests : false 00:01:38.312 00:01:38.312 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.889 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:38.889 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.889 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:38.889 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:38.889 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:38.889 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.889 [6/268] Linking static target lib/librte_kvargs.a 00:01:38.889 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.889 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.889 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.889 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.889 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.889 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.889 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.889 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.889 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.889 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.889 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.889 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.889 [19/268] Linking static target lib/librte_log.a 00:01:38.889 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.889 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.889 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.889 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.889 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.889 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.889 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.889 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.889 [28/268] Linking static target lib/librte_pci.a 00:01:38.889 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.889 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.889 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.889 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.147 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.147 [34/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.147 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.406 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.406 [37/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.406 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.406 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.406 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:39.406 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:39.406 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:39.406 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:39.406 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.406 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.406 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.406 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.406 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:39.406 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.406 [50/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.406 [51/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.406 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:39.406 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.406 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.406 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:39.406 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.406 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.406 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.406 [59/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.406 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.406 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.406 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:39.406 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.406 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.406 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.406 [66/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.406 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.406 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:39.406 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.406 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.406 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.406 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.406 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.406 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.406 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.406 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.406 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.406 [78/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.406 [79/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:39.406 [80/268] Linking static target lib/librte_telemetry.a 00:01:39.406 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.406 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.406 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.406 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.406 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.406 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.406 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.406 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.406 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:39.406 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.406 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.406 [92/268] Linking static target lib/librte_meter.a 00:01:39.406 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.406 [94/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.406 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.406 [96/268] Linking static target lib/librte_ring.a 00:01:39.406 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.406 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.406 [99/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.406 [100/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.406 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.406 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.406 [103/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.406 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.406 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.406 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.406 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.406 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.406 [109/268] Linking static target lib/librte_timer.a 00:01:39.406 [110/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.406 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.406 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.406 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.406 [114/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.406 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.406 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.406 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.406 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.406 [119/268] Linking static target lib/librte_cmdline.a 00:01:39.406 [120/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.406 [121/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.406 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.406 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:39.406 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.406 [125/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.406 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.406 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.406 [128/268] Linking static target lib/librte_eal.a 00:01:39.406 [129/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.406 [130/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.406 [131/268] Linking static target lib/librte_mempool.a 00:01:39.406 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.406 [133/268] Linking static target lib/librte_net.a 00:01:39.665 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.665 [135/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.665 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.665 [137/268] Linking static target lib/librte_rcu.a 00:01:39.665 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.665 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.665 [140/268] Linking static target lib/librte_mbuf.a 00:01:39.665 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.665 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.665 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.665 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.665 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.665 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.665 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.665 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.665 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.665 [150/268] Linking static target lib/librte_dmadev.a 00:01:39.666 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.666 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.666 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.666 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.666 [155/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:39.666 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.666 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.666 [158/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.666 [159/268] Linking static target lib/librte_hash.a 00:01:39.666 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.666 [161/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.666 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.666 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.666 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.666 [165/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.666 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.666 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.666 [168/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.666 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.666 [170/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.666 [171/268] Linking static target lib/librte_compressdev.a 00:01:39.925 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.925 [173/268] Linking static target lib/librte_reorder.a 00:01:39.925 [174/268] Linking target lib/librte_log.so.24.1 00:01:39.925 [175/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.925 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.925 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.925 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.925 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.925 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.925 [181/268] Linking static target lib/librte_security.a 00:01:39.925 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.925 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.925 [184/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.925 [185/268] Linking static target lib/librte_cryptodev.a 00:01:39.925 [186/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.925 [187/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.925 [188/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:39.925 [189/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.925 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.925 [191/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.925 [192/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.925 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.925 [194/268] Linking target lib/librte_kvargs.so.24.1 00:01:39.925 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.925 
[196/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.925 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.925 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.925 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:39.925 [200/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.925 [201/268] Linking target lib/librte_telemetry.so.24.1 00:01:39.925 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.184 [203/268] Linking static target lib/librte_power.a 00:01:40.184 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.184 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.184 [206/268] Linking static target drivers/librte_bus_pci.a 00:01:40.184 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.184 [208/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.184 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.184 [210/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.184 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.184 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.184 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.184 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:40.184 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:40.184 [216/268] Linking static target lib/librte_ethdev.a 00:01:40.184 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.443 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.443 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.443 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.443 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.443 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.443 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.702 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.702 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.961 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.961 [227/268] Linking static target lib/librte_vhost.a 00:01:40.961 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.961 [229/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.341 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.910 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.036 [232/268] 
Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.605 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.864 [234/268] Linking target lib/librte_eal.so.24.1 00:01:51.864 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.864 [236/268] Linking target lib/librte_dmadev.so.24.1 00:01:51.864 [237/268] Linking target lib/librte_ring.so.24.1 00:01:51.864 [238/268] Linking target lib/librte_meter.so.24.1 00:01:51.865 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.865 [240/268] Linking target lib/librte_pci.so.24.1 00:01:51.865 [241/268] Linking target lib/librte_timer.so.24.1 00:01:52.124 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.124 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.124 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.124 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.124 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.124 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:52.124 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.124 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:52.124 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:52.384 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:52.384 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:52.384 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.384 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.643 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:52.643 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:52.643 [257/268] Linking target lib/librte_net.so.24.1 00:01:52.643 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:52.643 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.643 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.643 [261/268] Linking target lib/librte_security.so.24.1 00:01:52.643 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:52.643 [263/268] Linking target lib/librte_hash.so.24.1 00:01:52.643 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:52.903 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:52.903 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:52.903 [267/268] Linking target lib/librte_power.so.24.1 00:01:52.903 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:52.903 INFO: autodetecting backend as ninja 00:01:52.903 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:53.840 CC lib/log/log.o 00:01:53.840 CC lib/ut_mock/mock.o 00:01:53.840 CC lib/log/log_flags.o 00:01:53.840 CC lib/log/log_deprecated.o 00:01:53.840 CC lib/ut/ut.o 00:01:54.099 LIB libspdk_ut_mock.a 00:01:54.099 LIB libspdk_log.a 00:01:54.099 LIB libspdk_ut.a 00:01:54.357 CC lib/util/bit_array.o 00:01:54.357 CC lib/util/base64.o 00:01:54.357 CC lib/util/cpuset.o 00:01:54.357 CC lib/util/crc16.o 
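The DPDK build that completes above was driven by the "User defined options" dump recorded earlier (buildtype debug, static default_library, trimmed app/lib sets, only the bus/pci/vdev and mempool/ring drivers). The literal configure command is not shown in the log; the sketch below is a reconstruction of an equivalent invocation, using only option names confirmed by the dump, with the long disable_apps/disable_libs lists abbreviated by "...":

    meson setup build-tmp \
        --buildtype=debug \
        --default-library=static \
        --libdir=lib \
        --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Dtests=false
    ninja -C build-tmp -j 112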
00:01:54.357 CC lib/util/crc32.o 00:01:54.357 CC lib/util/crc32c.o 00:01:54.357 CC lib/util/crc32_ieee.o 00:01:54.357 CC lib/util/crc64.o 00:01:54.357 CC lib/util/dif.o 00:01:54.357 CC lib/util/file.o 00:01:54.357 CC lib/util/fd.o 00:01:54.357 CC lib/util/fd_group.o 00:01:54.357 CC lib/util/hexlify.o 00:01:54.357 CC lib/util/iov.o 00:01:54.357 CC lib/util/math.o 00:01:54.357 CC lib/util/pipe.o 00:01:54.357 CC lib/util/net.o 00:01:54.357 CC lib/util/strerror_tls.o 00:01:54.357 CC lib/util/string.o 00:01:54.357 CC lib/util/xor.o 00:01:54.357 CC lib/util/uuid.o 00:01:54.357 CC lib/util/zipf.o 00:01:54.357 CC lib/util/md5.o 00:01:54.357 CC lib/dma/dma.o 00:01:54.357 CC lib/ioat/ioat.o 00:01:54.357 CXX lib/trace_parser/trace.o 00:01:54.617 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.617 CC lib/vfio_user/host/vfio_user.o 00:01:54.617 LIB libspdk_dma.a 00:01:54.617 LIB libspdk_ioat.a 00:01:54.617 LIB libspdk_util.a 00:01:54.617 LIB libspdk_vfio_user.a 00:01:54.876 LIB libspdk_trace_parser.a 00:01:54.876 CC lib/idxd/idxd_user.o 00:01:54.876 CC lib/idxd/idxd.o 00:01:54.876 CC lib/idxd/idxd_kernel.o 00:01:54.876 CC lib/json/json_parse.o 00:01:54.876 CC lib/json/json_write.o 00:01:54.876 CC lib/json/json_util.o 00:01:54.876 CC lib/rdma_provider/common.o 00:01:54.876 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:54.876 CC lib/rdma_utils/rdma_utils.o 00:01:54.876 CC lib/conf/conf.o 00:01:54.876 CC lib/vmd/vmd.o 00:01:54.876 CC lib/vmd/led.o 00:01:54.876 CC lib/env_dpdk/env.o 00:01:54.876 CC lib/env_dpdk/memory.o 00:01:54.876 CC lib/env_dpdk/pci.o 00:01:54.876 CC lib/env_dpdk/init.o 00:01:54.876 CC lib/env_dpdk/threads.o 00:01:54.876 CC lib/env_dpdk/pci_ioat.o 00:01:54.876 CC lib/env_dpdk/pci_virtio.o 00:01:54.876 CC lib/env_dpdk/pci_vmd.o 00:01:54.876 CC lib/env_dpdk/pci_idxd.o 00:01:54.876 CC lib/env_dpdk/pci_event.o 00:01:54.876 CC lib/env_dpdk/sigbus_handler.o 00:01:54.876 CC lib/env_dpdk/pci_dpdk.o 00:01:54.876 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.876 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.135 LIB libspdk_rdma_provider.a 00:01:55.135 LIB libspdk_conf.a 00:01:55.135 LIB libspdk_json.a 00:01:55.135 LIB libspdk_rdma_utils.a 00:01:55.135 LIB libspdk_idxd.a 00:01:55.394 LIB libspdk_vmd.a 00:01:55.394 CC lib/jsonrpc/jsonrpc_server.o 00:01:55.394 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.394 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:55.394 CC lib/jsonrpc/jsonrpc_client.o 00:01:55.653 LIB libspdk_jsonrpc.a 00:01:55.912 LIB libspdk_env_dpdk.a 00:01:55.912 CC lib/rpc/rpc.o 00:01:56.172 LIB libspdk_rpc.a 00:01:56.431 CC lib/notify/notify.o 00:01:56.431 CC lib/notify/notify_rpc.o 00:01:56.431 CC lib/keyring/keyring.o 00:01:56.431 CC lib/trace/trace.o 00:01:56.431 CC lib/keyring/keyring_rpc.o 00:01:56.431 CC lib/trace/trace_flags.o 00:01:56.431 CC lib/trace/trace_rpc.o 00:01:56.431 LIB libspdk_notify.a 00:01:56.431 LIB libspdk_keyring.a 00:01:56.431 LIB libspdk_trace.a 00:01:56.691 CC lib/thread/thread.o 00:01:56.691 CC lib/thread/iobuf.o 00:01:56.950 CC lib/sock/sock.o 00:01:56.950 CC lib/sock/sock_rpc.o 00:01:57.208 LIB libspdk_sock.a 00:01:57.467 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.467 CC lib/nvme/nvme_ns_cmd.o 00:01:57.467 CC lib/nvme/nvme_ctrlr.o 00:01:57.467 CC lib/nvme/nvme_fabric.o 00:01:57.467 CC lib/nvme/nvme_pcie_common.o 00:01:57.467 CC lib/nvme/nvme_ns.o 00:01:57.467 CC lib/nvme/nvme.o 00:01:57.467 CC lib/nvme/nvme_pcie.o 00:01:57.467 CC lib/nvme/nvme_qpair.o 00:01:57.467 CC lib/nvme/nvme_discovery.o 00:01:57.467 CC lib/nvme/nvme_quirks.o 00:01:57.467 CC lib/nvme/nvme_transport.o 
00:01:57.467 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.467 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.467 CC lib/nvme/nvme_tcp.o 00:01:57.467 CC lib/nvme/nvme_opal.o 00:01:57.467 CC lib/nvme/nvme_io_msg.o 00:01:57.467 CC lib/nvme/nvme_poll_group.o 00:01:57.467 CC lib/nvme/nvme_zns.o 00:01:57.467 CC lib/nvme/nvme_stubs.o 00:01:57.467 CC lib/nvme/nvme_auth.o 00:01:57.467 CC lib/nvme/nvme_cuse.o 00:01:57.467 CC lib/nvme/nvme_rdma.o 00:01:57.467 CC lib/nvme/nvme_vfio_user.o 00:01:57.467 LIB libspdk_thread.a 00:01:57.726 CC lib/fsdev/fsdev_io.o 00:01:57.726 CC lib/fsdev/fsdev.o 00:01:57.726 CC lib/vfu_tgt/tgt_endpoint.o 00:01:57.726 CC lib/fsdev/fsdev_rpc.o 00:01:57.726 CC lib/vfu_tgt/tgt_rpc.o 00:01:57.726 CC lib/virtio/virtio.o 00:01:57.726 CC lib/accel/accel.o 00:01:57.726 CC lib/virtio/virtio_vhost_user.o 00:01:57.726 CC lib/accel/accel_rpc.o 00:01:57.726 CC lib/init/json_config.o 00:01:57.726 CC lib/virtio/virtio_vfio_user.o 00:01:57.726 CC lib/blob/blobstore.o 00:01:57.726 CC lib/accel/accel_sw.o 00:01:57.726 CC lib/virtio/virtio_pci.o 00:01:57.726 CC lib/init/subsystem.o 00:01:57.726 CC lib/blob/request.o 00:01:57.726 CC lib/init/subsystem_rpc.o 00:01:57.726 CC lib/blob/zeroes.o 00:01:57.726 CC lib/init/rpc.o 00:01:57.726 CC lib/blob/blob_bs_dev.o 00:01:57.985 LIB libspdk_init.a 00:01:57.985 LIB libspdk_virtio.a 00:01:57.985 LIB libspdk_vfu_tgt.a 00:01:58.244 LIB libspdk_fsdev.a 00:01:58.244 CC lib/event/app.o 00:01:58.244 CC lib/event/reactor.o 00:01:58.244 CC lib/event/log_rpc.o 00:01:58.244 CC lib/event/app_rpc.o 00:01:58.244 CC lib/event/scheduler_static.o 00:01:58.503 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:58.503 LIB libspdk_event.a 00:01:58.503 LIB libspdk_accel.a 00:01:58.763 LIB libspdk_nvme.a 00:01:58.763 CC lib/bdev/bdev.o 00:01:58.763 CC lib/bdev/bdev_rpc.o 00:01:58.763 CC lib/bdev/scsi_nvme.o 00:01:58.763 CC lib/bdev/bdev_zone.o 00:01:58.763 CC lib/bdev/part.o 00:01:59.022 LIB libspdk_fuse_dispatcher.a 00:01:59.590 LIB libspdk_blob.a 00:01:59.850 CC lib/blobfs/blobfs.o 00:01:59.850 CC lib/blobfs/tree.o 00:01:59.850 CC lib/lvol/lvol.o 00:02:00.419 LIB libspdk_blobfs.a 00:02:00.419 LIB libspdk_lvol.a 00:02:00.419 LIB libspdk_bdev.a 00:02:00.986 CC lib/nvmf/ctrlr.o 00:02:00.986 CC lib/nvmf/ctrlr_discovery.o 00:02:00.986 CC lib/nvmf/ctrlr_bdev.o 00:02:00.986 CC lib/scsi/dev.o 00:02:00.986 CC lib/nvmf/subsystem.o 00:02:00.986 CC lib/scsi/lun.o 00:02:00.986 CC lib/nvmf/nvmf.o 00:02:00.986 CC lib/scsi/port.o 00:02:00.986 CC lib/nvmf/tcp.o 00:02:00.986 CC lib/nvmf/nvmf_rpc.o 00:02:00.986 CC lib/scsi/scsi.o 00:02:00.986 CC lib/nvmf/transport.o 00:02:00.986 CC lib/nvmf/mdns_server.o 00:02:00.986 CC lib/scsi/scsi_bdev.o 00:02:00.986 CC lib/scsi/scsi_pr.o 00:02:00.986 CC lib/nvmf/stubs.o 00:02:00.986 CC lib/scsi/scsi_rpc.o 00:02:00.986 CC lib/nvmf/vfio_user.o 00:02:00.986 CC lib/scsi/task.o 00:02:00.986 CC lib/nvmf/rdma.o 00:02:00.986 CC lib/nvmf/auth.o 00:02:00.986 CC lib/nbd/nbd_rpc.o 00:02:00.986 CC lib/nbd/nbd.o 00:02:00.986 CC lib/ftl/ftl_core.o 00:02:00.986 CC lib/ftl/ftl_debug.o 00:02:00.986 CC lib/ftl/ftl_init.o 00:02:00.986 CC lib/ublk/ublk.o 00:02:00.986 CC lib/ftl/ftl_layout.o 00:02:00.986 CC lib/ublk/ublk_rpc.o 00:02:00.986 CC lib/ftl/ftl_io.o 00:02:00.986 CC lib/ftl/ftl_sb.o 00:02:00.986 CC lib/ftl/ftl_l2p.o 00:02:00.986 CC lib/ftl/ftl_l2p_flat.o 00:02:00.986 CC lib/ftl/ftl_nv_cache.o 00:02:00.986 CC lib/ftl/ftl_band.o 00:02:00.986 CC lib/ftl/ftl_band_ops.o 00:02:00.986 CC lib/ftl/ftl_writer.o 00:02:00.986 CC lib/ftl/ftl_rq.o 00:02:00.986 CC lib/ftl/ftl_reloc.o 
00:02:00.986 CC lib/ftl/ftl_l2p_cache.o 00:02:00.986 CC lib/ftl/ftl_p2l.o 00:02:00.986 CC lib/ftl/ftl_p2l_log.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:00.986 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:00.986 CC lib/ftl/utils/ftl_md.o 00:02:00.986 CC lib/ftl/utils/ftl_conf.o 00:02:00.986 CC lib/ftl/utils/ftl_mempool.o 00:02:00.986 CC lib/ftl/utils/ftl_bitmap.o 00:02:00.986 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:00.986 CC lib/ftl/utils/ftl_property.o 00:02:00.986 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:00.986 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:00.986 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:00.986 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:00.986 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:00.986 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:00.986 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:00.986 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:00.986 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:00.986 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:00.986 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:00.986 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:00.986 CC lib/ftl/base/ftl_base_bdev.o 00:02:00.986 CC lib/ftl/ftl_trace.o 00:02:00.986 CC lib/ftl/base/ftl_base_dev.o 00:02:01.244 LIB libspdk_nbd.a 00:02:01.244 LIB libspdk_scsi.a 00:02:01.504 LIB libspdk_ublk.a 00:02:01.504 CC lib/vhost/vhost_rpc.o 00:02:01.504 CC lib/vhost/vhost.o 00:02:01.504 CC lib/vhost/vhost_blk.o 00:02:01.504 CC lib/vhost/vhost_scsi.o 00:02:01.504 CC lib/vhost/rte_vhost_user.o 00:02:01.504 CC lib/iscsi/init_grp.o 00:02:01.504 CC lib/iscsi/conn.o 00:02:01.504 CC lib/iscsi/iscsi.o 00:02:01.504 CC lib/iscsi/param.o 00:02:01.504 CC lib/iscsi/portal_grp.o 00:02:01.504 CC lib/iscsi/tgt_node.o 00:02:01.504 CC lib/iscsi/iscsi_subsystem.o 00:02:01.504 CC lib/iscsi/iscsi_rpc.o 00:02:01.504 CC lib/iscsi/task.o 00:02:01.504 LIB libspdk_ftl.a 00:02:02.070 LIB libspdk_nvmf.a 00:02:02.070 LIB libspdk_vhost.a 00:02:02.328 LIB libspdk_iscsi.a 00:02:02.586 CC module/vfu_device/vfu_virtio.o 00:02:02.586 CC module/vfu_device/vfu_virtio_blk.o 00:02:02.586 CC module/vfu_device/vfu_virtio_fs.o 00:02:02.586 CC module/vfu_device/vfu_virtio_scsi.o 00:02:02.586 CC module/vfu_device/vfu_virtio_rpc.o 00:02:02.586 CC module/env_dpdk/env_dpdk_rpc.o 00:02:02.844 CC module/accel/ioat/accel_ioat_rpc.o 00:02:02.844 CC module/accel/ioat/accel_ioat.o 00:02:02.844 CC module/accel/iaa/accel_iaa.o 00:02:02.844 CC module/accel/dsa/accel_dsa.o 00:02:02.844 CC module/accel/iaa/accel_iaa_rpc.o 00:02:02.844 CC module/fsdev/aio/fsdev_aio.o 00:02:02.844 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.844 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:02.844 CC module/fsdev/aio/linux_aio_mgr.o 00:02:02.844 CC module/accel/error/accel_error_rpc.o 00:02:02.844 CC module/accel/error/accel_error.o 00:02:02.844 CC module/blob/bdev/blob_bdev.o 00:02:02.844 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:02.844 CC module/sock/posix/posix.o 00:02:02.844 LIB libspdk_env_dpdk_rpc.a 00:02:02.844 CC module/keyring/linux/keyring_rpc.o 00:02:02.844 CC module/keyring/linux/keyring.o 
00:02:02.844 CC module/scheduler/gscheduler/gscheduler.o 00:02:02.844 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:02.844 CC module/keyring/file/keyring.o 00:02:02.844 CC module/keyring/file/keyring_rpc.o 00:02:02.844 LIB libspdk_accel_ioat.a 00:02:02.844 LIB libspdk_scheduler_gscheduler.a 00:02:02.844 LIB libspdk_scheduler_dynamic.a 00:02:02.844 LIB libspdk_accel_error.a 00:02:02.844 LIB libspdk_keyring_linux.a 00:02:02.844 LIB libspdk_accel_iaa.a 00:02:02.844 LIB libspdk_keyring_file.a 00:02:02.844 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.102 LIB libspdk_blob_bdev.a 00:02:03.102 LIB libspdk_accel_dsa.a 00:02:03.102 LIB libspdk_vfu_device.a 00:02:03.102 LIB libspdk_fsdev_aio.a 00:02:03.361 LIB libspdk_sock_posix.a 00:02:03.361 CC module/bdev/delay/vbdev_delay.o 00:02:03.361 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.361 CC module/bdev/raid/bdev_raid.o 00:02:03.361 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.361 CC module/bdev/split/vbdev_split.o 00:02:03.361 CC module/bdev/gpt/gpt.o 00:02:03.361 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.361 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:03.361 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:03.361 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.361 CC module/bdev/raid/raid0.o 00:02:03.361 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.361 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:03.361 CC module/bdev/raid/concat.o 00:02:03.361 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.361 CC module/bdev/raid/raid1.o 00:02:03.361 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.361 CC module/bdev/iscsi/bdev_iscsi.o 00:02:03.361 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:03.361 CC module/bdev/error/vbdev_error.o 00:02:03.361 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.361 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.361 CC module/bdev/nvme/bdev_nvme.o 00:02:03.361 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.361 CC module/bdev/nvme/nvme_rpc.o 00:02:03.361 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.361 CC module/bdev/null/bdev_null.o 00:02:03.361 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.361 CC module/bdev/nvme/vbdev_opal.o 00:02:03.361 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.361 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.361 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:03.361 CC module/bdev/null/bdev_null_rpc.o 00:02:03.361 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.361 CC module/bdev/malloc/bdev_malloc.o 00:02:03.361 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.361 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.361 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.361 CC module/bdev/ftl/bdev_ftl.o 00:02:03.361 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.361 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:03.361 CC module/bdev/aio/bdev_aio.o 00:02:03.621 LIB libspdk_blobfs_bdev.a 00:02:03.621 LIB libspdk_bdev_split.a 00:02:03.621 LIB libspdk_bdev_gpt.a 00:02:03.621 LIB libspdk_bdev_error.a 00:02:03.621 LIB libspdk_bdev_null.a 00:02:03.621 LIB libspdk_bdev_passthru.a 00:02:03.621 LIB libspdk_bdev_ftl.a 00:02:03.621 LIB libspdk_bdev_delay.a 00:02:03.621 LIB libspdk_bdev_aio.a 00:02:03.621 LIB libspdk_bdev_iscsi.a 00:02:03.621 LIB libspdk_bdev_zone_block.a 00:02:03.621 LIB libspdk_bdev_malloc.a 00:02:03.880 LIB libspdk_bdev_lvol.a 00:02:03.880 LIB libspdk_bdev_virtio.a 00:02:04.139 LIB libspdk_bdev_raid.a 00:02:04.802 LIB libspdk_bdev_nvme.a 00:02:05.425 CC module/event/subsystems/vmd/vmd.o 00:02:05.425 CC module/event/subsystems/scheduler/scheduler.o 00:02:05.425 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:05.425 CC module/event/subsystems/fsdev/fsdev.o 00:02:05.425 CC module/event/subsystems/iobuf/iobuf.o 00:02:05.425 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:05.425 CC module/event/subsystems/keyring/keyring.o 00:02:05.425 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:05.425 CC module/event/subsystems/sock/sock.o 00:02:05.425 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:05.425 LIB libspdk_event_fsdev.a 00:02:05.425 LIB libspdk_event_scheduler.a 00:02:05.425 LIB libspdk_event_keyring.a 00:02:05.425 LIB libspdk_event_vmd.a 00:02:05.425 LIB libspdk_event_vfu_tgt.a 00:02:05.425 LIB libspdk_event_iobuf.a 00:02:05.425 LIB libspdk_event_vhost_blk.a 00:02:05.425 LIB libspdk_event_sock.a 00:02:05.685 CC module/event/subsystems/accel/accel.o 00:02:05.945 LIB libspdk_event_accel.a 00:02:06.205 CC module/event/subsystems/bdev/bdev.o 00:02:06.205 LIB libspdk_event_bdev.a 00:02:06.774 CC module/event/subsystems/scsi/scsi.o 00:02:06.774 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:06.774 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:06.774 CC module/event/subsystems/ublk/ublk.o 00:02:06.774 CC module/event/subsystems/nbd/nbd.o 00:02:06.774 LIB libspdk_event_scsi.a 00:02:06.774 LIB libspdk_event_ublk.a 00:02:06.774 LIB libspdk_event_nbd.a 00:02:06.774 LIB libspdk_event_nvmf.a 00:02:07.033 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.033 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.033 LIB libspdk_event_iscsi.a 00:02:07.033 LIB libspdk_event_vhost_scsi.a 00:02:07.605 CC app/trace_record/trace_record.o 00:02:07.605 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.605 CC app/spdk_nvme_identify/identify.o 00:02:07.605 CC app/spdk_nvme_perf/perf.o 00:02:07.605 CC test/rpc_client/rpc_client_test.o 00:02:07.605 CXX app/trace/trace.o 00:02:07.605 TEST_HEADER include/spdk/accel.h 00:02:07.605 TEST_HEADER include/spdk/accel_module.h 00:02:07.605 TEST_HEADER include/spdk/base64.h 00:02:07.605 TEST_HEADER include/spdk/assert.h 00:02:07.605 CC app/spdk_lspci/spdk_lspci.o 00:02:07.605 TEST_HEADER include/spdk/barrier.h 00:02:07.605 TEST_HEADER include/spdk/bdev.h 00:02:07.605 TEST_HEADER include/spdk/bit_array.h 00:02:07.605 CC app/spdk_top/spdk_top.o 00:02:07.605 TEST_HEADER include/spdk/bdev_module.h 00:02:07.605 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.605 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.605 TEST_HEADER include/spdk/blob.h 00:02:07.605 TEST_HEADER include/spdk/bit_pool.h 00:02:07.605 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.605 TEST_HEADER include/spdk/config.h 00:02:07.605 TEST_HEADER include/spdk/blobfs.h 00:02:07.605 TEST_HEADER include/spdk/cpuset.h 00:02:07.605 TEST_HEADER include/spdk/crc16.h 00:02:07.605 TEST_HEADER include/spdk/crc32.h 00:02:07.605 TEST_HEADER include/spdk/crc64.h 00:02:07.605 TEST_HEADER include/spdk/conf.h 00:02:07.605 TEST_HEADER include/spdk/endian.h 00:02:07.605 TEST_HEADER include/spdk/dif.h 00:02:07.605 TEST_HEADER include/spdk/env.h 00:02:07.605 TEST_HEADER include/spdk/dma.h 00:02:07.606 TEST_HEADER include/spdk/event.h 00:02:07.606 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.606 TEST_HEADER include/spdk/fd.h 00:02:07.606 TEST_HEADER include/spdk/fd_group.h 00:02:07.606 TEST_HEADER include/spdk/fsdev_module.h 00:02:07.606 TEST_HEADER include/spdk/ftl.h 00:02:07.606 TEST_HEADER include/spdk/file.h 00:02:07.606 TEST_HEADER include/spdk/fsdev.h 00:02:07.606 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.606 TEST_HEADER include/spdk/hexlify.h 00:02:07.606 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:02:07.606 TEST_HEADER include/spdk/histogram_data.h 00:02:07.606 TEST_HEADER include/spdk/idxd.h 00:02:07.606 TEST_HEADER include/spdk/init.h 00:02:07.606 TEST_HEADER include/spdk/ioat.h 00:02:07.606 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.606 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.606 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.606 TEST_HEADER include/spdk/keyring.h 00:02:07.606 TEST_HEADER include/spdk/json.h 00:02:07.606 TEST_HEADER include/spdk/likely.h 00:02:07.606 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.606 TEST_HEADER include/spdk/log.h 00:02:07.606 TEST_HEADER include/spdk/keyring_module.h 00:02:07.606 TEST_HEADER include/spdk/lvol.h 00:02:07.606 TEST_HEADER include/spdk/memory.h 00:02:07.606 TEST_HEADER include/spdk/md5.h 00:02:07.606 TEST_HEADER include/spdk/mmio.h 00:02:07.606 TEST_HEADER include/spdk/nbd.h 00:02:07.606 TEST_HEADER include/spdk/net.h 00:02:07.606 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.606 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.606 TEST_HEADER include/spdk/notify.h 00:02:07.606 TEST_HEADER include/spdk/nvme.h 00:02:07.606 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.606 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.606 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.606 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.606 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.606 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.606 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.606 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.606 TEST_HEADER include/spdk/nvmf.h 00:02:07.606 TEST_HEADER include/spdk/opal.h 00:02:07.606 CC app/spdk_dd/spdk_dd.o 00:02:07.606 TEST_HEADER include/spdk/pci_ids.h 00:02:07.606 TEST_HEADER include/spdk/opal_spec.h 00:02:07.606 TEST_HEADER include/spdk/pipe.h 00:02:07.606 TEST_HEADER include/spdk/queue.h 00:02:07.606 TEST_HEADER include/spdk/scheduler.h 00:02:07.606 TEST_HEADER include/spdk/reduce.h 00:02:07.606 TEST_HEADER include/spdk/rpc.h 00:02:07.606 TEST_HEADER include/spdk/sock.h 00:02:07.606 TEST_HEADER include/spdk/scsi.h 00:02:07.606 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.606 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.606 TEST_HEADER include/spdk/string.h 00:02:07.606 TEST_HEADER include/spdk/thread.h 00:02:07.606 TEST_HEADER include/spdk/trace.h 00:02:07.606 TEST_HEADER include/spdk/trace_parser.h 00:02:07.606 TEST_HEADER include/spdk/stdinc.h 00:02:07.606 TEST_HEADER include/spdk/ublk.h 00:02:07.606 CC app/nvmf_tgt/nvmf_main.o 00:02:07.606 TEST_HEADER include/spdk/tree.h 00:02:07.606 TEST_HEADER include/spdk/util.h 00:02:07.606 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.606 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.606 CC app/spdk_tgt/spdk_tgt.o 00:02:07.606 TEST_HEADER include/spdk/uuid.h 00:02:07.606 TEST_HEADER include/spdk/version.h 00:02:07.606 TEST_HEADER include/spdk/vhost.h 00:02:07.606 TEST_HEADER include/spdk/xor.h 00:02:07.606 TEST_HEADER include/spdk/zipf.h 00:02:07.606 TEST_HEADER include/spdk/vmd.h 00:02:07.606 CXX test/cpp_headers/accel_module.o 00:02:07.606 CXX test/cpp_headers/accel.o 00:02:07.606 CXX test/cpp_headers/assert.o 00:02:07.606 CXX test/cpp_headers/base64.o 00:02:07.606 CXX test/cpp_headers/bdev.o 00:02:07.606 CXX test/cpp_headers/barrier.o 00:02:07.606 CXX test/cpp_headers/bdev_zone.o 00:02:07.606 CXX test/cpp_headers/bit_array.o 00:02:07.606 CXX test/cpp_headers/blob_bdev.o 00:02:07.606 CXX test/cpp_headers/bdev_module.o 00:02:07.606 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.606 CXX 
test/cpp_headers/bit_pool.o 00:02:07.606 CXX test/cpp_headers/blob.o 00:02:07.606 CXX test/cpp_headers/blobfs.o 00:02:07.606 CXX test/cpp_headers/conf.o 00:02:07.606 CXX test/cpp_headers/config.o 00:02:07.606 CXX test/cpp_headers/crc32.o 00:02:07.606 CXX test/cpp_headers/crc16.o 00:02:07.606 CXX test/cpp_headers/cpuset.o 00:02:07.606 CXX test/cpp_headers/dif.o 00:02:07.606 CXX test/cpp_headers/crc64.o 00:02:07.606 CC test/app/stub/stub.o 00:02:07.606 CXX test/cpp_headers/endian.o 00:02:07.606 CXX test/cpp_headers/dma.o 00:02:07.606 CXX test/cpp_headers/env.o 00:02:07.606 CXX test/cpp_headers/env_dpdk.o 00:02:07.606 CXX test/cpp_headers/event.o 00:02:07.606 CXX test/cpp_headers/fd_group.o 00:02:07.606 CXX test/cpp_headers/fd.o 00:02:07.606 CXX test/cpp_headers/file.o 00:02:07.606 CXX test/cpp_headers/fsdev_module.o 00:02:07.606 CXX test/cpp_headers/ftl.o 00:02:07.606 CXX test/cpp_headers/fsdev.o 00:02:07.606 CXX test/cpp_headers/gpt_spec.o 00:02:07.606 CXX test/cpp_headers/fuse_dispatcher.o 00:02:07.606 CXX test/cpp_headers/hexlify.o 00:02:07.606 CXX test/cpp_headers/histogram_data.o 00:02:07.606 CXX test/cpp_headers/idxd.o 00:02:07.606 CXX test/cpp_headers/init.o 00:02:07.606 CXX test/cpp_headers/idxd_spec.o 00:02:07.606 CXX test/cpp_headers/ioat.o 00:02:07.606 CXX test/cpp_headers/iscsi_spec.o 00:02:07.606 CXX test/cpp_headers/ioat_spec.o 00:02:07.606 CXX test/cpp_headers/json.o 00:02:07.606 CXX test/cpp_headers/keyring.o 00:02:07.606 CC test/app/histogram_perf/histogram_perf.o 00:02:07.606 CC test/app/jsoncat/jsoncat.o 00:02:07.606 CXX test/cpp_headers/jsonrpc.o 00:02:07.606 CXX test/cpp_headers/keyring_module.o 00:02:07.606 CXX test/cpp_headers/log.o 00:02:07.606 CXX test/cpp_headers/likely.o 00:02:07.606 CXX test/cpp_headers/lvol.o 00:02:07.606 CXX test/cpp_headers/md5.o 00:02:07.606 CXX test/cpp_headers/memory.o 00:02:07.606 CXX test/cpp_headers/mmio.o 00:02:07.606 CXX test/cpp_headers/nbd.o 00:02:07.606 CXX test/cpp_headers/net.o 00:02:07.606 CXX test/cpp_headers/notify.o 00:02:07.606 CXX test/cpp_headers/nvme.o 00:02:07.606 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.606 CXX test/cpp_headers/nvme_intel.o 00:02:07.606 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.606 CXX test/cpp_headers/nvme_spec.o 00:02:07.606 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.606 CXX test/cpp_headers/nvme_zns.o 00:02:07.606 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.606 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.606 CXX test/cpp_headers/nvmf.o 00:02:07.606 CC test/env/vtophys/vtophys.o 00:02:07.606 CXX test/cpp_headers/nvmf_spec.o 00:02:07.606 CC test/thread/poller_perf/poller_perf.o 00:02:07.606 CXX test/cpp_headers/opal.o 00:02:07.606 CXX test/cpp_headers/nvmf_transport.o 00:02:07.606 CXX test/cpp_headers/opal_spec.o 00:02:07.606 CXX test/cpp_headers/pci_ids.o 00:02:07.606 CXX test/cpp_headers/pipe.o 00:02:07.606 CC examples/ioat/perf/perf.o 00:02:07.606 CXX test/cpp_headers/queue.o 00:02:07.606 CXX test/cpp_headers/reduce.o 00:02:07.606 CC examples/util/zipf/zipf.o 00:02:07.606 CXX test/cpp_headers/rpc.o 00:02:07.606 CC test/env/memory/memory_ut.o 00:02:07.606 CXX test/cpp_headers/scheduler.o 00:02:07.606 CXX test/cpp_headers/scsi.o 00:02:07.606 CXX test/cpp_headers/scsi_spec.o 00:02:07.606 CXX test/cpp_headers/sock.o 00:02:07.606 CXX test/cpp_headers/stdinc.o 00:02:07.606 CC app/fio/nvme/fio_plugin.o 00:02:07.606 CC test/env/pci/pci_ut.o 00:02:07.606 CC examples/ioat/verify/verify.o 00:02:07.606 CC test/thread/lock/spdk_lock.o 00:02:07.606 LINK spdk_lspci 00:02:07.606 CC 
test/app/bdev_svc/bdev_svc.o 00:02:07.606 CXX test/cpp_headers/string.o 00:02:07.606 CXX test/cpp_headers/thread.o 00:02:07.606 LINK rpc_client_test 00:02:07.606 LINK spdk_nvme_discover 00:02:07.606 CC app/fio/bdev/fio_plugin.o 00:02:07.606 CC test/dma/test_dma/test_dma.o 00:02:07.606 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.606 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.606 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.606 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.606 LINK spdk_trace_record 00:02:07.866 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:07.866 CXX test/cpp_headers/trace.o 00:02:07.866 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.866 LINK jsoncat 00:02:07.866 CXX test/cpp_headers/trace_parser.o 00:02:07.866 CXX test/cpp_headers/tree.o 00:02:07.866 LINK histogram_perf 00:02:07.866 CXX test/cpp_headers/ublk.o 00:02:07.866 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:07.866 CXX test/cpp_headers/util.o 00:02:07.866 CXX test/cpp_headers/uuid.o 00:02:07.866 LINK iscsi_tgt 00:02:07.866 CXX test/cpp_headers/version.o 00:02:07.866 LINK stub 00:02:07.866 CXX test/cpp_headers/vfio_user_pci.o 00:02:07.866 CXX test/cpp_headers/vfio_user_spec.o 00:02:07.866 CXX test/cpp_headers/vhost.o 00:02:07.866 LINK nvmf_tgt 00:02:07.866 CXX test/cpp_headers/vmd.o 00:02:07.866 CXX test/cpp_headers/xor.o 00:02:07.866 CXX test/cpp_headers/zipf.o 00:02:07.866 LINK vtophys 00:02:07.866 LINK interrupt_tgt 00:02:07.866 LINK poller_perf 00:02:07.866 LINK zipf 00:02:07.866 LINK env_dpdk_post_init 00:02:07.866 LINK spdk_tgt 00:02:07.866 LINK bdev_svc 00:02:07.866 LINK verify 00:02:07.866 LINK ioat_perf 00:02:07.866 LINK spdk_trace 00:02:08.125 LINK nvme_fuzz 00:02:08.125 LINK llvm_vfio_fuzz 00:02:08.125 LINK vhost_fuzz 00:02:08.125 LINK spdk_dd 00:02:08.125 LINK pci_ut 00:02:08.125 LINK spdk_nvme_identify 00:02:08.125 LINK spdk_nvme 00:02:08.125 LINK spdk_nvme_perf 00:02:08.125 LINK test_dma 00:02:08.125 LINK spdk_bdev 00:02:08.125 LINK spdk_top 00:02:08.125 LINK mem_callbacks 00:02:08.384 LINK llvm_nvme_fuzz 00:02:08.384 CC examples/vmd/led/led.o 00:02:08.384 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.384 CC examples/idxd/perf/perf.o 00:02:08.384 CC examples/thread/thread/thread_ex.o 00:02:08.384 CC app/vhost/vhost.o 00:02:08.384 CC examples/sock/hello_world/hello_sock.o 00:02:08.384 LINK led 00:02:08.384 LINK lsvmd 00:02:08.643 LINK hello_sock 00:02:08.643 LINK memory_ut 00:02:08.643 LINK vhost 00:02:08.643 LINK idxd_perf 00:02:08.643 LINK thread 00:02:08.643 LINK spdk_lock 00:02:08.643 LINK iscsi_fuzz 00:02:09.211 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:09.211 CC examples/nvme/hotplug/hotplug.o 00:02:09.211 CC examples/nvme/arbitration/arbitration.o 00:02:09.211 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.211 CC examples/nvme/hello_world/hello_world.o 00:02:09.211 CC examples/nvme/reconnect/reconnect.o 00:02:09.211 CC examples/nvme/abort/abort.o 00:02:09.211 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.211 CC test/event/reactor/reactor.o 00:02:09.211 CC test/event/reactor_perf/reactor_perf.o 00:02:09.211 CC test/event/event_perf/event_perf.o 00:02:09.211 CC test/event/app_repeat/app_repeat.o 00:02:09.211 CC test/event/scheduler/scheduler.o 00:02:09.471 LINK cmb_copy 00:02:09.471 LINK hello_world 00:02:09.471 LINK pmr_persistence 00:02:09.471 LINK hotplug 00:02:09.471 LINK event_perf 00:02:09.471 LINK reactor 00:02:09.471 LINK reactor_perf 00:02:09.471 LINK app_repeat 00:02:09.471 LINK reconnect 00:02:09.471 LINK arbitration 
00:02:09.471 LINK abort 00:02:09.471 LINK nvme_manage 00:02:09.471 LINK scheduler 00:02:09.728 CC test/nvme/compliance/nvme_compliance.o 00:02:09.728 CC test/nvme/connect_stress/connect_stress.o 00:02:09.728 CC test/nvme/aer/aer.o 00:02:09.728 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:09.728 CC test/nvme/cuse/cuse.o 00:02:09.728 CC test/nvme/reset/reset.o 00:02:09.728 CC test/nvme/fused_ordering/fused_ordering.o 00:02:09.728 CC test/accel/dif/dif.o 00:02:09.728 CC test/nvme/err_injection/err_injection.o 00:02:09.728 CC test/nvme/startup/startup.o 00:02:09.728 CC test/nvme/reserve/reserve.o 00:02:09.728 CC test/nvme/e2edp/nvme_dp.o 00:02:09.728 CC test/nvme/simple_copy/simple_copy.o 00:02:09.728 CC test/nvme/boot_partition/boot_partition.o 00:02:09.728 CC test/nvme/overhead/overhead.o 00:02:09.728 CC test/nvme/sgl/sgl.o 00:02:09.728 CC test/nvme/fdp/fdp.o 00:02:09.728 CC test/blobfs/mkfs/mkfs.o 00:02:09.985 CC test/lvol/esnap/esnap.o 00:02:09.985 LINK connect_stress 00:02:09.985 LINK doorbell_aers 00:02:09.985 LINK boot_partition 00:02:09.985 LINK startup 00:02:09.985 LINK fused_ordering 00:02:09.985 LINK reserve 00:02:09.985 LINK err_injection 00:02:09.985 LINK aer 00:02:09.985 LINK simple_copy 00:02:09.985 LINK nvme_dp 00:02:09.985 LINK reset 00:02:09.985 LINK fdp 00:02:09.985 LINK overhead 00:02:09.985 LINK sgl 00:02:09.985 LINK mkfs 00:02:09.985 LINK nvme_compliance 00:02:10.244 CC examples/accel/perf/accel_perf.o 00:02:10.244 LINK dif 00:02:10.244 CC examples/blob/hello_world/hello_blob.o 00:02:10.244 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:10.244 CC examples/blob/cli/blobcli.o 00:02:10.502 LINK hello_blob 00:02:10.502 LINK hello_fsdev 00:02:10.502 LINK accel_perf 00:02:10.502 LINK blobcli 00:02:10.502 LINK cuse 00:02:11.438 CC examples/bdev/hello_world/hello_bdev.o 00:02:11.438 CC examples/bdev/bdevperf/bdevperf.o 00:02:11.438 LINK hello_bdev 00:02:11.696 LINK bdevperf 00:02:11.696 CC test/bdev/bdevio/bdevio.o 00:02:11.954 LINK bdevio 00:02:13.334 CC examples/nvmf/nvmf/nvmf.o 00:02:13.334 LINK esnap 00:02:13.334 LINK nvmf 00:02:14.714 00:02:14.714 real 0m44.686s 00:02:14.714 user 6m16.547s 00:02:14.714 sys 2m27.979s 00:02:14.714 17:51:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:14.715 17:51:35 make -- common/autotest_common.sh@10 -- $ set +x 00:02:14.715 ************************************ 00:02:14.715 END TEST make 00:02:14.715 ************************************ 00:02:14.715 17:51:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:14.715 17:51:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:14.715 17:51:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:14.715 17:51:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.715 17:51:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:14.715 17:51:35 -- pm/common@44 -- $ pid=1350830 00:02:14.715 17:51:35 -- pm/common@50 -- $ kill -TERM 1350830 00:02:14.715 17:51:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.715 17:51:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:14.715 17:51:35 -- pm/common@44 -- $ pid=1350832 00:02:14.715 17:51:35 -- pm/common@50 -- $ kill -TERM 1350832 00:02:14.715 17:51:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.715 17:51:35 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:14.715 17:51:35 -- pm/common@44 -- $ pid=1350834 00:02:14.715 17:51:35 -- pm/common@50 -- $ kill -TERM 1350834 00:02:14.715 17:51:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.715 17:51:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:14.715 17:51:35 -- pm/common@44 -- $ pid=1350860 00:02:14.715 17:51:35 -- pm/common@50 -- $ sudo -E kill -TERM 1350860 00:02:14.715 17:51:36 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:14.715 17:51:36 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:14.715 17:51:36 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:14.715 17:51:36 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:14.715 17:51:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:14.715 17:51:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:14.715 17:51:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:14.715 17:51:36 -- scripts/common.sh@336 -- # IFS=.-: 00:02:14.715 17:51:36 -- scripts/common.sh@336 -- # read -ra ver1 00:02:14.715 17:51:36 -- scripts/common.sh@337 -- # IFS=.-: 00:02:14.715 17:51:36 -- scripts/common.sh@337 -- # read -ra ver2 00:02:14.715 17:51:36 -- scripts/common.sh@338 -- # local 'op=<' 00:02:14.715 17:51:36 -- scripts/common.sh@340 -- # ver1_l=2 00:02:14.715 17:51:36 -- scripts/common.sh@341 -- # ver2_l=1 00:02:14.715 17:51:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:14.715 17:51:36 -- scripts/common.sh@344 -- # case "$op" in 00:02:14.715 17:51:36 -- scripts/common.sh@345 -- # : 1 00:02:14.715 17:51:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:14.715 17:51:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:14.715 17:51:36 -- scripts/common.sh@365 -- # decimal 1 00:02:14.715 17:51:36 -- scripts/common.sh@353 -- # local d=1 00:02:14.715 17:51:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:14.715 17:51:36 -- scripts/common.sh@355 -- # echo 1 00:02:14.715 17:51:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:14.715 17:51:36 -- scripts/common.sh@366 -- # decimal 2 00:02:14.715 17:51:36 -- scripts/common.sh@353 -- # local d=2 00:02:14.715 17:51:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:14.715 17:51:36 -- scripts/common.sh@355 -- # echo 2 00:02:14.715 17:51:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:14.715 17:51:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:14.715 17:51:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:14.715 17:51:36 -- scripts/common.sh@368 -- # return 0 00:02:14.715 17:51:36 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:14.715 17:51:36 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:14.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:14.715 --rc genhtml_branch_coverage=1 00:02:14.715 --rc genhtml_function_coverage=1 00:02:14.715 --rc genhtml_legend=1 00:02:14.715 --rc geninfo_all_blocks=1 00:02:14.715 --rc geninfo_unexecuted_blocks=1 00:02:14.715 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:14.715 ' 00:02:14.715 17:51:36 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:14.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:14.715 --rc genhtml_branch_coverage=1 00:02:14.715 --rc genhtml_function_coverage=1 00:02:14.715 --rc genhtml_legend=1 00:02:14.715 --rc geninfo_all_blocks=1 00:02:14.715 --rc geninfo_unexecuted_blocks=1 00:02:14.715 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:14.715 ' 00:02:14.715 17:51:36 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:14.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:14.715 --rc genhtml_branch_coverage=1 00:02:14.715 --rc genhtml_function_coverage=1 00:02:14.715 --rc genhtml_legend=1 00:02:14.715 --rc geninfo_all_blocks=1 00:02:14.715 --rc geninfo_unexecuted_blocks=1 00:02:14.715 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:14.715 ' 00:02:14.715 17:51:36 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:14.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:14.715 --rc genhtml_branch_coverage=1 00:02:14.715 --rc genhtml_function_coverage=1 00:02:14.715 --rc genhtml_legend=1 00:02:14.715 --rc geninfo_all_blocks=1 00:02:14.715 --rc geninfo_unexecuted_blocks=1 00:02:14.715 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:14.715 ' 00:02:14.715 17:51:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:14.715 17:51:36 -- nvmf/common.sh@7 -- # uname -s 00:02:14.715 17:51:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:14.715 17:51:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:14.715 17:51:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:14.715 17:51:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:14.715 17:51:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:14.715 17:51:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:14.715 17:51:36 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:14.715 17:51:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:14.715 17:51:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:14.715 17:51:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:14.715 17:51:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:14.715 17:51:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:14.715 17:51:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:14.715 17:51:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:14.715 17:51:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:14.715 17:51:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:14.715 17:51:36 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:14.715 17:51:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:14.715 17:51:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:14.715 17:51:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.715 17:51:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.715 17:51:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.715 17:51:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.715 17:51:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.715 17:51:36 -- paths/export.sh@5 -- # export PATH 00:02:14.715 17:51:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.715 17:51:36 -- nvmf/common.sh@51 -- # : 0 00:02:14.715 17:51:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:14.715 17:51:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:14.715 17:51:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:14.715 17:51:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:14.715 17:51:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:14.715 17:51:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:14.715 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:14.715 17:51:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:14.715 17:51:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:14.715 17:51:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:14.715 17:51:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:14.715 17:51:36 -- spdk/autotest.sh@32 -- # uname -s 00:02:14.715 
17:51:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:14.715 17:51:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:14.715 17:51:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:14.976 17:51:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:14.976 17:51:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:14.976 17:51:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:14.976 17:51:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:14.976 17:51:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:14.976 17:51:36 -- spdk/autotest.sh@48 -- # udevadm_pid=1414184 00:02:14.976 17:51:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:14.976 17:51:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:14.976 17:51:36 -- pm/common@17 -- # local monitor 00:02:14.976 17:51:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.976 17:51:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.976 17:51:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.976 17:51:36 -- pm/common@21 -- # date +%s 00:02:14.976 17:51:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.976 17:51:36 -- pm/common@21 -- # date +%s 00:02:14.976 17:51:36 -- pm/common@25 -- # sleep 1 00:02:14.976 17:51:36 -- pm/common@21 -- # date +%s 00:02:14.976 17:51:36 -- pm/common@21 -- # date +%s 00:02:14.976 17:51:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728143496 00:02:14.976 17:51:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728143496 00:02:14.976 17:51:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728143496 00:02:14.976 17:51:36 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728143496 00:02:14.976 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728143496_collect-vmstat.pm.log 00:02:14.976 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728143496_collect-cpu-load.pm.log 00:02:14.976 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728143496_collect-cpu-temp.pm.log 00:02:14.976 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728143496_collect-bmc-pm.bmc.pm.log 00:02:15.915 17:51:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:15.915 17:51:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:15.915 17:51:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:15.915 17:51:37 -- common/autotest_common.sh@10 -- # set +x 
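The prologue traced above (autotest.sh@33-40) swaps the kernel's core_pattern so that any crash during the run is piped into SPDK's core-collector script instead of systemd-coredump. xtrace prints the two echo commands but not their redirect targets, so the sketch below is a plausible reconstruction under that assumption; the coredumps.path destination is purely illustrative and not confirmed by the log.

    #!/usr/bin/env bash
    # Sketch of the core_pattern swap seen in the trace above.
    # Requires root; paths mirror this workspace but are assumptions here.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    output_dir=$rootdir/../output

    # Remember the systemd-coredump handler so cleanup can restore it.
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)

    mkdir -p "$output_dir/coredumps"

    # Leading '|' tells the kernel to pipe each core to this handler
    # with PID (%P), signal (%s) and timestamp (%t) as arguments.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

    # Record where collected cores should land (hypothetical target file;
    # the real redirect destination is not visible in the xtrace).
    echo "$output_dir/coredumps" > "$output_dir/coredumps.path"

The same prologue then backgrounds the pm/collect-* resource monitors with a shared epoch suffix from date +%s, which is why four "Redirecting to ...monitor.autotest.sh.1728143496_*.pm.log" lines appear together.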
00:02:15.915 17:51:37 -- spdk/autotest.sh@59 -- # create_test_list 00:02:15.915 17:51:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:15.915 17:51:37 -- common/autotest_common.sh@10 -- # set +x 00:02:15.915 17:51:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:15.915 17:51:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:15.915 17:51:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:15.915 17:51:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:15.915 17:51:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:15.915 17:51:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:15.915 17:51:37 -- common/autotest_common.sh@1455 -- # uname 00:02:15.915 17:51:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:15.915 17:51:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:15.915 17:51:37 -- common/autotest_common.sh@1475 -- # uname 00:02:15.915 17:51:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:15.915 17:51:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:15.915 17:51:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:15.915 lcov: LCOV version 1.15 00:02:15.915 17:51:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:24.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:24.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:02:32.170 17:51:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:32.170 17:51:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:32.170 17:51:52 -- common/autotest_common.sh@10 -- # set +x 00:02:32.170 17:51:52 -- spdk/autotest.sh@78 -- # rm -f 00:02:32.170 17:51:52 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.463 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:35.463 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.463 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:35.463 17:51:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:35.463 17:51:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:35.463 17:51:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:35.463 17:51:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:35.463 17:51:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:35.463 17:51:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:35.463 17:51:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:35.463 17:51:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.463 17:51:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:35.463 17:51:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:35.463 17:51:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:35.463 17:51:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:35.463 17:51:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:35.463 17:51:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:35.463 17:51:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:35.463 No valid GPT data, bailing 00:02:35.463 17:51:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:35.463 17:51:56 -- scripts/common.sh@394 -- # pt= 00:02:35.463 17:51:56 -- scripts/common.sh@395 -- # return 1 00:02:35.463 17:51:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:35.463 1+0 records in 00:02:35.463 1+0 records out 00:02:35.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680716 s, 154 MB/s 00:02:35.463 17:51:56 -- spdk/autotest.sh@105 -- # sync 00:02:35.463 17:51:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:35.463 17:51:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:35.463 17:51:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:43.589 17:52:04 -- spdk/autotest.sh@111 -- # uname -s 00:02:43.589 17:52:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:43.589 17:52:04 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:02:43.589 17:52:04 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.589 17:52:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:43.589 17:52:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:43.589 17:52:04 -- common/autotest_common.sh@10 -- # set +x 00:02:43.589 ************************************ 00:02:43.589 START TEST setup.sh 00:02:43.589 ************************************ 00:02:43.589 17:52:04 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.589 * Looking for test storage... 
00:02:43.589 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:43.589 17:52:04 setup.sh -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:43.589 17:52:04 setup.sh -- common/autotest_common.sh@1681 -- # lcov --version 00:02:43.589 17:52:04 setup.sh -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:43.589 17:52:04 setup.sh -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@345 -- # : 1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@353 -- # local d=1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@355 -- # echo 1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@353 -- # local d=2 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:43.589 17:52:04 setup.sh -- scripts/common.sh@355 -- # echo 2 00:02:43.590 17:52:04 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:02:43.590 17:52:04 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:43.590 17:52:04 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:43.590 17:52:04 setup.sh -- scripts/common.sh@368 -- # return 0 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 
00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:43.590 17:52:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:43.590 17:52:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:43.590 17:52:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:43.590 ************************************ 00:02:43.590 START TEST acl 00:02:43.590 ************************************ 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:43.590 * Looking for test storage... 
00:02:43.590 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1681 -- # lcov --version 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:43.590 17:52:04 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc 
genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.590 --rc genhtml_branch_coverage=1 00:02:43.590 --rc genhtml_function_coverage=1 00:02:43.590 --rc genhtml_legend=1 00:02:43.590 --rc geninfo_all_blocks=1 00:02:43.590 --rc geninfo_unexecuted_blocks=1 00:02:43.590 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.590 ' 00:02:43.590 17:52:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:43.590 17:52:04 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:43.591 17:52:04 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:43.591 17:52:04 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:43.591 17:52:04 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:43.591 17:52:04 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.591 17:52:04 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:43.591 17:52:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:43.591 17:52:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:43.591 17:52:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:43.591 17:52:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:43.591 17:52:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:43.591 17:52:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.591 17:52:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.928 17:52:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:47.928 17:52:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:47.928 17:52:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.928 17:52:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:47.928 17:52:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.928 17:52:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:50.461 Hugepages 00:02:50.461 node hugesize free / total 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.461 17:52:11 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.461 00:02:50.461 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.461 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:50.462 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.720 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.720 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.720 17:52:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:50.720 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:50.720 17:52:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:50.720 17:52:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:50.720 17:52:12 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:50.720 17:52:12 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:50.720 17:52:12 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:50.720 17:52:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:50.720 ************************************ 00:02:50.720 START TEST denied 00:02:50.720 ************************************ 00:02:50.720 17:52:12 setup.sh.acl.denied -- 
common/autotest_common.sh@1125 -- # denied 00:02:50.720 17:52:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:50.720 17:52:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:50.720 17:52:12 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:50.720 17:52:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.720 17:52:12 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:54.908 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.908 17:52:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.096 00:02:59.096 real 0m7.885s 00:02:59.096 user 0m2.486s 00:02:59.096 sys 0m4.686s 00:02:59.096 17:52:19 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:59.096 17:52:19 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:59.096 ************************************ 00:02:59.096 END TEST denied 00:02:59.096 ************************************ 00:02:59.096 17:52:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:59.096 17:52:20 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:59.096 17:52:20 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:59.096 17:52:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.096 ************************************ 00:02:59.096 START TEST allowed 00:02:59.096 ************************************ 00:02:59.096 17:52:20 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:59.096 17:52:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:59.096 17:52:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:59.096 17:52:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:59.096 17:52:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.096 17:52:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:04.368 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:04.368 17:52:25 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:04.368 17:52:25 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:04.368 17:52:25 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:04.368 17:52:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.368 17:52:25 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.661 00:03:07.661 real 0m8.986s 00:03:07.661 user 0m2.570s 00:03:07.661 sys 0m4.996s 00:03:07.661 17:52:29 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:07.661 17:52:29 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:07.661 ************************************ 00:03:07.661 END TEST allowed 00:03:07.661 ************************************ 00:03:07.661 00:03:07.661 real 0m24.578s 00:03:07.661 user 0m7.816s 00:03:07.661 sys 0m14.916s 00:03:07.661 17:52:29 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:07.661 17:52:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.661 ************************************ 00:03:07.661 END TEST acl 00:03:07.661 ************************************ 00:03:07.661 17:52:29 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:07.661 17:52:29 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:07.661 17:52:29 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:07.661 17:52:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.922 ************************************ 00:03:07.922 START TEST hugepages 00:03:07.922 ************************************ 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:07.922 * Looking for test storage... 00:03:07.922 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lcov --version 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:07.922 17:52:29 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.922 --rc genhtml_branch_coverage=1 00:03:07.922 --rc genhtml_function_coverage=1 00:03:07.922 --rc genhtml_legend=1 00:03:07.922 --rc geninfo_all_blocks=1 00:03:07.922 --rc geninfo_unexecuted_blocks=1 00:03:07.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.922 ' 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.922 --rc genhtml_branch_coverage=1 00:03:07.922 --rc genhtml_function_coverage=1 00:03:07.922 --rc genhtml_legend=1 00:03:07.922 --rc geninfo_all_blocks=1 00:03:07.922 --rc geninfo_unexecuted_blocks=1 00:03:07.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.922 ' 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.922 --rc genhtml_branch_coverage=1 00:03:07.922 --rc genhtml_function_coverage=1 00:03:07.922 --rc genhtml_legend=1 00:03:07.922 --rc geninfo_all_blocks=1 00:03:07.922 --rc geninfo_unexecuted_blocks=1 00:03:07.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.922 ' 00:03:07.922 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:07.922 --rc genhtml_branch_coverage=1 00:03:07.922 --rc genhtml_function_coverage=1 00:03:07.922 --rc genhtml_legend=1 00:03:07.922 --rc geninfo_all_blocks=1 00:03:07.922 --rc geninfo_unexecuted_blocks=1 00:03:07.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:07.922 ' 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:07.922 17:52:29 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.922 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 40950312 kB' 'MemAvailable: 42834360 kB' 'Buffers: 4384 kB' 'Cached: 10812128 kB' 'SwapCached: 0 kB' 'Active: 9111240 kB' 'Inactive: 2217824 kB' 'Active(anon): 8610028 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516004 kB' 'Mapped: 214404 kB' 'Shmem: 8535684 kB' 'KReclaimable: 272868 kB' 'Slab: 1246012 kB' 'SReclaimable: 272868 kB' 'SUnreclaim: 973144 kB' 'KernelStack: 21936 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433340 kB' 'Committed_AS: 10295536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216800 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:07.923 
17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:07.923 17:52:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
(... the same compare/continue cycle repeats for every remaining /proc/meminfo field, SwapTotal through HugePages_Surp, until the Hugepagesize line is reached ...)
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
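The block above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time under xtrace until the requested key matches. A minimal standalone sketch of that pattern, assuming plain bash and the stock /proc/meminfo layout (get_meminfo_sketch is a made-up name for illustration, not the SPDK function):

get_meminfo_sketch() {
	# Split each /proc/meminfo line on ': ' into key, value, unit and
	# print the value of the first line whose key matches $1.
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # non-matching fields fall through, as traced above
		echo "$val"                        # value only, e.g. 2048 (kB implied)
		return 0
	done </proc/meminfo
	return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this runner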
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:08.184 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
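get_nodes sized two NUMA nodes and clear_hp then wrote 0 into every per-node hugepage pool so the test starts from a clean slate. A sketch of that clearing loop, assuming the standard sysfs hugepage layout and root privileges (illustrative, not the setup/hugepages.sh source):

# Zero each per-node, per-size hugepage pool (node0 and node1 here),
# mirroring the clear_hp trace above; writing the sysfs files needs root.
for node in /sys/devices/system/node/node[0-9]*; do
	for hp in "$node"/hugepages/hugepages-*; do
		echo 0 > "$hp/nr_hugepages"
	done
done
export CLEAR_HUGE=yes   # exported for scripts/setup.sh, per the trace above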
00:03:08.185 17:52:29 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:08.185 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:08.185 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:08.185 17:52:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:08.185 ************************************
00:03:08.185 START TEST single_node_setup
00:03:08.185 ************************************
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.185 17:52:29 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:11.472 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:11.472 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:12.851 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
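The page math above is direct: get_test_nr_hugepages converts the requested size into a page count using the Hugepagesize probed earlier, and "setup output" then runs scripts/setup.sh with that count pinned to one node. A back-of-envelope check (sketch; both values are in kB):

size_kb=2097152                       # 2 GiB, from get_test_nr_hugepages 2097152 0
hugepagesize_kb=2048                  # Hugepagesize read from /proc/meminfo above
echo $((size_kb / hugepagesize_kb))   # -> 1024, the nr_hugepages seen in the trace

With NRHUGE=1024 and HUGENODE=0 in the environment, scripts/setup.sh reserves the pages on node 0 and rebinds the ioatdma channels and the NVMe controller at 0000:d8:00.0 to vfio-pci, as logged above.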
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43152116 kB' 'MemAvailable: 45036144 kB' 'Buffers: 4384 kB' 'Cached: 10812264 kB' 'SwapCached: 0 kB' 'Active: 9113364 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612152 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518056 kB' 'Mapped: 213976 kB' 'Shmem: 8535820 kB' 'KReclaimable: 272828 kB' 'Slab: 1244092 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971264 kB' 'KernelStack: 22368 kB' 'PageTables: 10052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10296376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217056 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
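Note that node= is empty at setup/common.sh@18, so the @23 test for /sys/devices/system/node/node/meminfo fails and the snapshot comes from the system-wide /proc/meminfo. The dump already shows the state the verification is after; a quick way to pull just the hugepage counters on a live system (sketch, not part of the test itself):

grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
# On this runner the matching lines read: HugePages_Total: 1024, HugePages_Free: 1024,
# HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB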
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:13.118 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
(... get_meminfo walks the snapshot field by field, MemFree through HardwareCorrupted, one compare/continue per field, until AnonHugePages matches ...)
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
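anon=0 is reached through the guard at setup/hugepages.sh@95: the THP switch reads "always [madvise] never" on this runner, which does not contain "[never]", so AnonHugePages is sampled and comes back 0. A sketch of that gate (illustrative; the variable names are made up):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
	# THP is not fully disabled, so THP-backed anonymous memory could exist
	anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "${anon:-0}"   # 0 kB here: no transparent hugepages in use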
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43153312 kB' 'MemAvailable: 45037340 kB' 'Buffers: 4384 kB' 'Cached: 10812264 kB' 'SwapCached: 0 kB' 'Active: 9112244 kB' 'Inactive: 2217824 kB' 'Active(anon): 8611032 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516784 kB' 'Mapped: 213976 kB' 'Shmem: 8535820 kB' 'KReclaimable: 272828 kB' 'Slab: 1244044 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971216 kB' 'KernelStack: 22096 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10296396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217008 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.119 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
(... the compare/continue cycle repeats for each field of this snapshot, MemFree through HugePages_Rsvd, until HugePages_Surp matches ...)
00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
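With anon=0 and surp=0 collected, and HugePages_Rsvd queried next, the snapshot supports a by-hand consistency check of the allocation. A simplified sketch of the kind of comparison verify_nr_hugepages is building toward (assumption: the real checks, including the per-node bookkeeping behind sorted_t/sorted_s, live in setup/hugepages.sh):

nrhuge=1024                  # requested via NRHUGE above
total=1024 free=1024         # HugePages_Total / HugePages_Free from the snapshot
rsvd=0 surp=0                # HugePages_Rsvd / HugePages_Surp
if (( total == nrhuge && rsvd == 0 && surp == 0 )); then
	echo "hugepage pool matches the request: $free of $total pages free"
fi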
get=HugePages_Rsvd 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43154460 kB' 'MemAvailable: 45038488 kB' 'Buffers: 4384 kB' 'Cached: 10812280 kB' 'SwapCached: 0 kB' 'Active: 9112408 kB' 'Inactive: 2217824 kB' 'Active(anon): 8611196 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516884 kB' 'Mapped: 213992 kB' 'Shmem: 8535836 kB' 'KReclaimable: 272828 kB' 'Slab: 1244332 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971504 kB' 'KernelStack: 22032 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10303192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216944 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.121 
17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.121 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.122 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:13.123 nr_hugepages=1024 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:13.123 resv_hugepages=0 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:13.123 surplus_hugepages=0 00:03:13.123 17:52:34 
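The scans above are setup/common.sh's get_meminfo resolving HugePages_Surp and then HugePages_Rsvd: it splits each /proc/meminfo line on ': ', skips every key that is not the one requested, and echoes the matching value (0 in both cases, hence surp=0 and resv=0). The backslash-escaped patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are simply how set -x renders a literal (quoted) comparison inside [[ ]]. A minimal standalone sketch of that lookup, for the simplest case only (get_meminfo_sketch is a hypothetical name; the real helper also handles the per-node files shown further down):

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # A quoted RHS forces a literal string match; xtrace shows this
            # as the escaped \H\u\g... pattern seen throughout this log.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this run
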
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:13.123 anon_hugepages=0 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.123 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43153324 kB' 'MemAvailable: 45037352 kB' 'Buffers: 4384 kB' 'Cached: 10812308 kB' 'SwapCached: 0 kB' 'Active: 9111760 kB' 'Inactive: 2217824 kB' 'Active(anon): 8610548 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516180 kB' 'Mapped: 213976 kB' 'Shmem: 8535864 kB' 'KReclaimable: 272828 kB' 'Slab: 1244340 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971512 kB' 'KernelStack: 22112 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10296192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217040 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.124 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- 
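At this point the script has read HugePages_Total (1024) and confirmed the accounting identity 1024 == nr_hugepages + surp + resv, and get_nodes walks /sys/devices/system/node/node*/ recording a per-node page count (nodes_sys[0]=1024, nodes_sys[1]=0, so no_nodes=2). A hedged sketch of that walk, assuming the counts come from the standard sysfs nr_hugepages files (the exact source file is not visible in this trace):

    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

    nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        # ${node##*node} keeps only the numeric node id (0, 1, ...)
        nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine
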
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 24653916 kB' 'MemUsed: 7931452 kB' 'SwapCached: 0 kB' 'Active: 3870572 kB' 'Inactive: 268160 kB' 'Active(anon): 3583832 kB' 'Inactive(anon): 0 kB' 'Active(file): 286740 kB' 'Inactive(file): 268160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3803304 kB' 'Mapped: 100148 kB' 'AnonPages: 338588 kB' 'Shmem: 3248404 kB' 'KernelStack: 12984 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120536 kB' 'Slab: 581756 kB' 'SReclaimable: 120536 kB' 'SUnreclaim: 461220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.125 17:52:34 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _
00:03:13.125 [setup/common.sh@31-32: IFS=': ' / read -r var val _ / continue repeated for every per-node meminfo field that is not HugePages_Surp (MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free)]
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:13.127 node0=1024 expecting 1024
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:13.127
00:03:13.127 real	0m5.077s
00:03:13.127 user	0m1.179s
00:03:13.127 sys	0m2.297s
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:13.127 17:52:34 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:03:13.127 ************************************
00:03:13.127 END TEST single_node_setup
00:03:13.127 ************************************
00:03:13.127 17:52:34 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:03:13.127 17:52:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:13.127 17:52:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:13.127 17:52:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:13.387 ************************************
00:03:13.387 START TEST even_2G_alloc
00:03:13.387 ************************************
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.387 17:52:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:16.679 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:16.679 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
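[editor's aside] The get_test_nr_hugepages_per_node trace above is easier to follow outside xtrace form: 2097152 kB requested / 2048 kB per hugepage = 1024 pages, spread evenly over the box's 2 NUMA nodes (512 apiece, per the two '@81 nodes_test[_no_nodes - 1]=512' lines). A minimal sketch of that splitting pattern, with variable names taken from the log; the loop body is reconstructed from the @80-@83 trace lines, not the verbatim SPDK helper:

#!/usr/bin/env bash
# Even per-node hugepage split, as walked through in the trace above.
nr_hugepages=1024   # 2097152 kB total / 2048 kB per page
_no_nodes=2         # NUMA nodes on the test box
declare -a nodes_test

_nr=$nr_hugepages
while (( _no_nodes > 0 )); do
  # hand the highest-numbered unassigned node an equal share
  nodes_test[_no_nodes - 1]=$(( _nr / _no_nodes ))
  (( _nr -= nodes_test[_no_nodes - 1] ))
  (( _no_nodes-- ))
done

for node in "${!nodes_test[@]}"; do
  printf 'node%s=%s\n' "$node" "${nodes_test[node]}"
done
# -> node0=512
#    node1=512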
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.679 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43206172 kB' 'MemAvailable: 45090200 kB' 'Buffers: 4384 kB' 'Cached: 10812428 kB' 'SwapCached: 0 kB' 'Active: 9111492 kB' 'Inactive: 2217824 kB' 'Active(anon): 8610280 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515888 kB' 'Mapped: 213044 kB' 'Shmem: 8535984 kB' 'KReclaimable: 272828 kB' 'Slab: 1244240 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971412 kB' 'KernelStack: 21872 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10287176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
00:03:16.679 [setup/common.sh@31-32: read -r var val _ / continue repeated for every /proc/meminfo field preceding AnonHugePages (MemTotal through HardwareCorrupted)]
00:03:16.681 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.681 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.681 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.681 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:16.681 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:16.681 [setup/common.sh@17-31: same get_meminfo preamble as above, now with get=HugePages_Surp]
00:03:16.681 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43205024 kB' 'MemAvailable: 45089052 kB' 'Buffers: 4384 kB' 'Cached: 10812432 kB' 'SwapCached: 0 kB' 'Active: 9111624 kB' 'Inactive: 2217824 kB' 'Active(anon): 8610412 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515940 kB' 'Mapped: 212944 kB' 'Shmem: 8535988 kB' 'KReclaimable: 272828 kB' 'Slab: 1244160 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971332 kB' 'KernelStack: 21840 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10287192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216992 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
00:03:16.681 [setup/common.sh@31-32: field-by-field scan for HugePages_Surp begins]
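[editor's aside] What the wall of 'continue' lines in this log amounts to: get_meminfo splits each meminfo line on ': ' and stops at the requested key. A stripped-down sketch of that pattern (the real common.sh helper also handles the per-node /sys/devices/system/node/nodeN/meminfo variant, stripping "Node N " prefixes via mem=("${mem[@]#Node +([0-9]) }"); that branch is omitted here):

#!/usr/bin/env bash
# Minimal get_meminfo-style lookup: emit the value of one /proc/meminfo
# key. Mirrors the traced loop: IFS=': ' splits "Key: value kB" into
# var/val, and every non-matching key hits 'continue' (the lines that
# flood the xtrace output above).
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1   # key not present
}

get_meminfo AnonHugePages    # -> 0 on the box traced above
get_meminfo HugePages_Total  # -> 1024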
00:03:16.682 [setup/common.sh@31-32: continue for every field preceding HugePages_Surp (MemTotal through HugePages_Rsvd)]
00:03:16.682 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.682 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:16.683 [setup/common.sh@17-31: same get_meminfo preamble, now with get=HugePages_Rsvd]
00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43204868 kB' 'MemAvailable: 45088896 kB' 'Buffers: 4384 kB' 'Cached: 10812452 kB' 'SwapCached: 0 kB' 'Active: 9111608 kB' 'Inactive: 2217824 kB' 'Active(anon): 8610396 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515940 kB' 'Mapped: 212944 kB' 'Shmem: 8536008 kB' 'KReclaimable: 272828 kB' 'Slab: 1244160 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971332 kB' 'KernelStack: 21840 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10287216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216992 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.683 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.684 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- 
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:16.685 nr_hugepages=1024
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:16.685 resv_hugepages=0
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:16.685 surplus_hugepages=0
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:16.685 anon_hugepages=0
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
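All three lookups above (HugePages_Surp, HugePages_Rsvd, and the HugePages_Total one that follows) step through the same helper in setup/common.sh: slurp the meminfo file with mapfile, strip any "Node <N> " prefix, then split each "key: value" line on IFS=': ' until the requested key matches. A minimal sketch of that loop, reconstructed from the xtrace records rather than copied from the SPDK source:

# Sketch only -- reconstructed from the trace above, not the verbatim helper.
shopt -s extglob # the +([0-9]) patterns below need extended globbing
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node counters live in sysfs. With no node argument the probe above
    # hits the nonexistent .../node/node/meminfo and keeps the global file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # sysfs lines carry a "Node <N> " prefix; strip it so both sources parse alike.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # xtrace prints the quoted right-hand side with every character escaped
        # (\H\u\g\e...), i.e. this is a literal comparison, not a glob.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Against this run's snapshot that yields surp=0 and resv=0, so the checks just logged, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), pass: all 1024 requested 2048 kB pages exist and none are surplus or reserved.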
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43204868 kB' 'MemAvailable: 45088896 kB' 'Buffers: 4384 kB' 'Cached: 10812472 kB' 'SwapCached: 0 kB' 'Active: 9111620 kB' 'Inactive: 2217824 kB' 'Active(anon): 8610408 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515936 kB' 'Mapped: 212944 kB' 'Shmem: 8536028 kB' 'KReclaimable: 272828 kB' 'Slab: 1244156 kB' 'SReclaimable: 272828 kB' 'SUnreclaim: 971328 kB' 'KernelStack: 21840 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10287236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216992 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.685 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 read/compare/continue records repeat for every remaining /proc/meminfo key ...]
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:16.686 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
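get_nodes, just traced, enumerates the NUMA nodes with the same extglob pattern and records how many 2048 kB pages each node holds; the xtrace shows the assignments with the values already expanded (512 per node, i.e. the 1024 requested pages split evenly across no_nodes=2). A sketch under that reading; the per-node nr_hugepages sysfs counter is an assumed source, since the trace only shows the final 512:

# Sketch only -- hypothetical reconstruction of the get_nodes bookkeeping.
shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips through the last "node", leaving the index (0, 1).
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]} # 2 on this rig, 512 pages each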
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25710112 kB' 'MemUsed: 6875256 kB' 'SwapCached: 0 kB' 'Active: 3870728 kB' 'Inactive: 268160 kB' 'Active(anon): 3583988 kB' 'Inactive(anon): 0 kB' 'Active(file): 286740 kB' 'Inactive(file): 268160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3803424 kB' 'Mapped: 99648 kB' 'AnonPages: 338704 kB' 'Shmem: 3248524 kB' 'KernelStack: 13016 kB' 'PageTables: 5588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120536 kB' 'Slab: 581648 kB' 'SReclaimable: 120536 kB' 'SUnreclaim: 461112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- 
00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.687 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare-and-continue xtrace elided for Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
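The compare-and-continue churn above is the xtrace of the get_meminfo helper in setup/common.sh (the @-line markers): it snapshots a meminfo file once, then scans it field by field until the requested key (HugePages_Surp here) matches, and echoes that field's value. A minimal standalone sketch of the pattern, reconstructed from the trace rather than copied from common.sh (the name get_meminfo_field and the exact control flow are assumptions):

    #!/usr/bin/env bash
    # Reconstructed sketch of the scan traced above; not a verbatim copy of
    # setup/common.sh. Prints the value of one meminfo field, optionally for
    # a single NUMA node.
    get_meminfo_field() {
        local get=$1 node=${2:-}           # field name, optional NUMA node
        local var val _ mem_f=/proc/meminfo
        # Per-node counters live in sysfs when a node is requested.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so both sources
        # parse identically (the extglob expansion seen in the trace).
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"   # _ swallows the "kB" unit
            [[ $var == "$get" ]] || continue        # the repeated [[ ... ]] above
            echo "$val"
            return 0
        done
        return 1
    }
    # Usage: get_meminfo_field HugePages_Surp 1   -> prints node 1's surplus (0 here)

The backslash-riddled \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the log is just how xtrace prints the quoted right-hand side of [[ $var == "$get" ]], escaping each character to mark it as a literal match rather than a glob pattern.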
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.688 17:52:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698412 kB' 'MemFree: 17495196 kB' 'MemUsed: 10203216 kB' 'SwapCached: 0 kB' 'Active: 5240740 kB' 'Inactive: 1949664 kB' 'Active(anon): 5026268 kB' 'Inactive(anon): 438208 kB' 'Active(file): 214472 kB' 'Inactive(file): 1511456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013456 kB' 'Mapped: 113296 kB' 'AnonPages: 177000 kB' 'Shmem: 5287528 kB' 'KernelStack: 8808 kB' 'PageTables: 2664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152292 kB' 'Slab: 662508 kB' 'SReclaimable: 152292 kB' 'SUnreclaim: 510216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical compare-and-continue xtrace elided for every field from MemTotal through HugePages_Free ...]
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
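Both node scans return a surplus of 0, which the caller folds into its per-node expectations before the node0/node1 check that follows. A sketch of that accounting loop, with the variable names taken from the hugepages.sh@114-116 xtrace and the loop body reconstructed around them (get_meminfo_field is the helper sketched earlier):

    # Pad each node's expected hugepage count with reserved and surplus pages
    # so the final comparison tolerates kernel-side adjustments. Reconstructed
    # sketch, not the verbatim hugepages.sh loop.
    nodes_test=(512 512)   # expected split of the 1024-page pool across 2 nodes
    resv=0                 # reserved pages, 0 in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo_field HugePages_Surp "$node") ))
    done

With surplus 0 on both nodes this leaves node0=512 and node1=512, which is exactly what the "expecting 512" lines below report.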
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:16.689 17:52:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:03:16.689
00:03:16.689 real 0m3.440s
00:03:16.689 user 0m1.262s
00:03:16.689 sys 0m2.196s
17:52:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
17:52:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:16.690 ************************************
00:03:16.690 END TEST even_2G_alloc
00:03:16.690 ************************************
00:03:16.690 17:52:38 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
17:52:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
17:52:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
17:52:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST odd_alloc
************************************
17:52:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
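get_test_nr_hugepages converts a kB budget into a page count: the HUGEMEM=2049 MB seen below is 2098176 kB, and with the default 2048 kB hugepage that is 1024.5 pages, which the trace rounds up to nr_hugepages=1025 (odd_alloc deliberately lands on an odd count). A plausible sketch of the conversion; the rounding mode is inferred from those numbers, not read out of hugepages.sh, and the helper name is hypothetical:

    # Size -> page count as implied by the trace: 2098176 kB / 2048 kB = 1024.5,
    # and the run requests 1025 pages, i.e. ceiling division. The real logic
    # sits around setup/hugepages.sh@48-56.
    default_hugepages=2048   # kB, the Hugepagesize reported in the meminfo dumps
    size_to_nr_hugepages() {
        local size=$1        # requested pool size in kB
        echo $(( (size + default_hugepages - 1) / default_hugepages ))
    }
    size_to_nr_hugepages 2098176   # -> 1025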
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.690 17:52:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:19.978 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.978 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.978 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.978 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.978 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.978 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.979 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
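The @80-@83 loop above is the per-node split of the odd page count: 1025 pages over 2 nodes becomes 512 on node1 and 513 on node0, handed out highest node first as "remaining pages / nodes left" (the bare `: 513` and `: 1` entries are no-op colon commands, apparently there so xtrace logs the intermediate values). A reconstructed sketch of that distribution, keeping the variable names from the trace but with the surrounding structure assumed:

    # Split _nr_hugepages across _no_nodes, highest node index first, so any
    # remainder lands on node0. Reconstructed from the hugepages.sh@80-83 trace.
    _nr_hugepages=1025
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # 513, then 0
        : $(( _no_nodes -= 1 ))                                      # 1, then 0
    done
    echo "${nodes_test[@]}"   # -> 513 512  (node0=513, node1=512)

HUGEMEM=2049 then hands the 2049 MB request to scripts/setup.sh, which reserves the hugepage pool and reports the already-bound vfio-pci devices, after which verify_nr_hugepages re-reads meminfo.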
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.979 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43246356 kB' 'MemAvailable: 45130368 kB' 'Buffers: 4384 kB' 'Cached: 10812596 kB' 'SwapCached: 0 kB' 'Active: 9113696 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612484 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516884 kB' 'Mapped: 213080 kB' 'Shmem: 8536152 kB' 'KReclaimable: 272796 kB' 'Slab: 1244656 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971860 kB' 'KernelStack: 21872 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 10287868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217056 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[... identical compare-and-continue xtrace elided for every field from MemTotal through HardwareCorrupted ...]
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.243 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43247796 kB' 'MemAvailable: 45131808 kB' 'Buffers: 4384 kB' 'Cached: 10812596 kB' 'SwapCached: 0 kB' 'Active: 9113584 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612372 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517228 kB' 'Mapped: 213080 kB' 'Shmem: 8536152 kB' 'KReclaimable: 272796 kB' 'Slab: 1244656 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971860 kB' 'KernelStack: 21856 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 10287884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[... the HugePages_Surp scan then walks the same fields; the excerpt breaks off mid-scan at VmallocTotal ...]
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.244 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 
17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
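Stripped of the xtrace noise, the lookup above is a plain-bash scan of /proc/meminfo: split each line on ': ', skip every key that is not the requested one, and echo the value on the first match. A minimal standalone sketch of that pattern (get_meminfo_value is a hypothetical name used here for illustration; the real helper is get_meminfo in the test's setup/common.sh):

#!/usr/bin/env bash
# Sketch: fetch one field from a meminfo-style file, mirroring the
# IFS=': ' read loop traced above. Size fields are reported in kB;
# HugePages_* fields are plain page counts.
get_meminfo_value() {
	local get=$1 file=${2:-/proc/meminfo} var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue # the "continue" lines in the trace
		echo "$val"
		return 0
	done <"$file"
	return 1
}

get_meminfo_value HugePages_Surp # prints 0 on this box, per the dump above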
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.245 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43248268 kB' 'MemAvailable: 45132280 kB' 'Buffers: 4384 kB' 'Cached: 10812616 kB' 'SwapCached: 0 kB' 'Active: 9113100 kB' 'Inactive: 2217824 kB' 'Active(anon): 8611888 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517240 kB' 'Mapped: 213004 kB' 'Shmem: 8536172 kB' 'KReclaimable: 272796 kB' 'Slab: 1244600 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971804 kB' 'KernelStack: 21872 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 10287904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" trace repeated for every key, MemTotal through HugePages_Free, in /proc/meminfo order ...]
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:03:20.247 nr_hugepages=1025
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:20.247 resv_hugepages=0
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:20.247 surplus_hugepages=0
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:20.247 anon_hugepages=0
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
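With surp=0 and resv=0 in hand, hugepages.sh@106-108 assert that the kernel's page count matches the request. A compact sketch of that accounting identity, using awk as a stand-in for the traced read loop (values as observed in this run):

# Sketch: HugePages_Total must equal the requested nr_hugepages plus
# surplus and reserved pages; all three reads come from /proc/meminfo.
nr_hugepages=1025
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0 here
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0 here
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo) # 1025 here
((total == nr_hugepages + surp + resv)) && echo "hugepage accounting consistent"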
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.247 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43248016 kB' 'MemAvailable: 45132028 kB' 'Buffers: 4384 kB' 'Cached: 10812656 kB' 'SwapCached: 0 kB' 'Active: 9112772 kB' 'Inactive: 2217824 kB' 'Active(anon): 8611560 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516824 kB' 'Mapped: 213004 kB' 'Shmem: 8536212 kB' 'KReclaimable: 272796 kB' 'Slab: 1244600 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971804 kB' 'KernelStack: 21856 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 10287924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" trace repeated for every key, MemTotal through Unaccepted, in /proc/meminfo order ...]
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
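get_nodes has just read the per-node counts back from sysfs: the odd 1025-page request landed as 513 pages on node0 and 512 on node1, a ceil/floor split. A sketch reproducing those expected values, assuming the round-robin distribution the kernel applied here across two online nodes (the test's own split policy lives in setup/hugepages.sh):

# Sketch: distribute an odd hugepage count over NUMA nodes the way the
# nodes_sys values above came out: ceil(nr/nodes) on the first node(s),
# floor(nr/nodes) on the rest. 1025 over 2 nodes -> 513 + 512.
nr=1025 nodes=2
split=()
for ((i = 0; i < nodes; i++)); do
	split[i]=$((nr / nodes + (i < nr % nodes ? 1 : 0)))
done
echo "${split[@]}" # 513 512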
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.249 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25723952 kB' 'MemUsed: 6861416 kB' 'SwapCached: 0 kB' 'Active: 3871008 kB' 'Inactive: 268160 kB' 'Active(anon): 3584268 kB' 'Inactive(anon): 0 kB' 'Active(file): 286740 kB' 'Inactive(file): 268160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3803552 kB' 'Mapped: 99660 kB' 'AnonPages: 338868 kB' 'Shmem: 3248652 kB' 'KernelStack: 13080 kB' 'PageTables: 5752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120536 kB' 'Slab: 582212 kB' 'SReclaimable: 120536 kB' 'SUnreclaim: 461676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" trace over the node0 keys, MemTotal onward; the log is truncated mid-scan after the NFS_Unstable compare at 00:03:20.250 ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.250 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.251 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698412 kB' 'MemFree: 17524064 kB' 'MemUsed: 10174348 kB' 'SwapCached: 0 kB' 'Active: 5242152 kB' 'Inactive: 1949664 kB' 'Active(anon): 5027680 kB' 'Inactive(anon): 438208 kB' 'Active(file): 214472 kB' 'Inactive(file): 1511456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013492 kB' 'Mapped: 113344 kB' 'AnonPages: 178376 kB' 'Shmem: 5287564 kB' 'KernelStack: 8792 kB' 'PageTables: 2612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152260 kB' 'Slab: 662388 kB' 'SReclaimable: 152260 kB' 'SUnreclaim: 510128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same setup/common.sh@31-32 field-matching loop walks the node1 snapshot, "continue" on every key until the requested one]
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513'
00:03:20.252 node0=513 expecting 513
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:03:20.252 node1=512 expecting 512
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:20.252
00:03:20.252 real	0m3.500s
00:03:20.252 user	0m1.346s
00:03:20.252 sys	0m2.203s
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:20.252 17:52:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.252 ************************************
00:03:20.252 END TEST odd_alloc
00:03:20.252 ************************************
00:03:20.252 17:52:41 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc
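For readability, the get_meminfo helper whose xtrace fills the odd_alloc section above can be sketched roughly as follows. This is a paraphrase reconstructed from the trace (setup/common.sh@17-33), not the verbatim SPDK source: the traced loop matches the key with an escaped literal pattern (printed by xtrace as \H\u\g\e\P\a\g\e\s\_\S\u\r\p), which the sketch replaces with an equivalent quoted comparison.

shopt -s extglob                 # needed for the +([0-9]) pattern below

get_meminfo() {                  # get_meminfo <field> [node]
	local get=$1
	local node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Per-NUMA-node stats live in sysfs and prefix each line with "Node <n> "
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node <n> " prefix
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # scan until the requested field
		echo "$val"                       # e.g. HugePages_Surp -> 0
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Called as get_meminfo HugePages_Surp 0, this walks node0's snapshot exactly as traced above and prints 0, which is why every iteration shows up as a "continue" until the HugePages_Surp line matches.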
00:03:20.252 17:52:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:20.252 17:52:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:20.252 17:52:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.252 ************************************
00:03:20.252 START TEST custom_alloc
00:03:20.252 ************************************
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=,
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=()
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 ))
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:20.252 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 ))
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.253 17:52:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:23.544 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.544 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
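The two get_test_nr_hugepages calls traced above reduce a size to a page count before setup.sh runs. Judging by the numbers, and the 'Hugepagesize: 2048 kB' field in the /proc/meminfo snapshot below, the division is size divided by the default hugepage size, with both in kB. A minimal sketch of that arithmetic, using hypothetical variable names (only default_hugepages appears in the trace itself):

hugepagesize_kb=2048                   # 'Hugepagesize: 2048 kB' in /proc/meminfo
size_kb=1048576                        # first get_test_nr_hugepages argument (1 GiB)
echo $((size_kb / hugepagesize_kb))    # 512  -> nodes_hp[0]
size_kb=2097152                        # second call (2 GiB)
echo $((size_kb / hugepagesize_kb))    # 1024 -> nodes_hp[1]
echo $((512 + 1024))                   # 1536 -> nr_hugepages, matching 'HugePages_Total: 1536' below

The result is exactly the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' string handed to scripts/setup.sh above, pinning 512 pages on node0 and 1024 on node1.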
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.544 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42207368 kB' 'MemAvailable: 44091380 kB' 'Buffers: 4384 kB' 'Cached: 10812768 kB' 'SwapCached: 0 kB' 'Active: 9114516 kB' 'Inactive: 2217824 kB' 'Active(anon): 8613304 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517920 kB' 'Mapped: 213056 kB' 'Shmem: 8536324 kB' 'KReclaimable: 272796 kB' 'Slab: 1244732 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971936 kB' 'KernelStack: 21856 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 10288560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216976 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: setup/common.sh@31-32 walk every field of the snapshot ("MemTotal", "MemFree", ..., "HardwareCorrupted"), hitting "continue" on each non-matching key]
00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.547 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.548 17:52:44 
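The condensed blocks above and below are bash xtrace from the get_meminfo helper in setup/common.sh: one [[ ... ]] compare plus one continue per /proc/meminfo key until the requested key matches, at which point its value is echoed back to the caller. The backslash-escaped patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are simply how xtrace prints the quoted right-hand side of [[ == ]]. A minimal sketch of the loop, reconstructed from this trace rather than copied from setup/common.sh, is:

  shopt -s extglob                          # for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # With $node empty the probe becomes .../node/meminfo (exactly as the
      # @23 test in the trace shows), which never exists, so the global
      # /proc/meminfo is kept.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # the continue lines in the trace
          echo "$val"                       # e.g. 0 for AnonHugePages
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Each call is captured by the caller, e.g. anon=$(get_meminfo AnonHugePages), which is where the anon=0 assignment above comes from.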
00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
[xtrace condensed: setup/common.sh@17-31 — local get=HugePages_Surp, node= (empty), mem_f=/proc/meminfo, mapfile -t mem, strip 'Node N ' prefixes, then the IFS=': '/read loop]
00:03:23.546 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42206840 kB' 'MemAvailable: 44090852 kB' 'Buffers: 4384 kB' 'Cached: 10812772 kB' 'SwapCached: 0 kB' 'Active: 9113772 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612560 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517644 kB' 'Mapped: 212960 kB' 'Shmem: 8536328 kB' 'KReclaimable: 272796 kB' 'Slab: 1244668 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971872 kB' 'KernelStack: 21840 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 10288576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216960 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: setup/common.sh@31-32 — every key from MemTotal through HugePages_Rsvd is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue]
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: setup/common.sh@17-31 — same setup as above with get=HugePages_Rsvd]
00:03:23.548 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42207344 kB' 'MemAvailable: 44091356 kB' 'Buffers: 4384 kB' 'Cached: 10812788 kB' 'SwapCached: 0 kB' 'Active: 9114056 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612844 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517984 kB' 'Mapped: 212960 kB' 'Shmem: 8536344 kB' 'KReclaimable: 272796 kB' 'Slab: 1244668 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971872 kB' 'KernelStack: 21840 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 10288744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216944 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: setup/common.sh@31-32 — per-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, continue on every key from MemTotal through HugePages_Free]
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:03:23.550 nr_hugepages=1536
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:23.550 resv_hugepages=0
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:23.550 surplus_hugepages=0
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:23.550 anon_hugepages=0
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:23.550 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
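The echoes and arithmetic tests just above are the pool-accounting step of the custom_alloc test: the values read back from /proc/meminfo must add up to the page count the test configured. A hedged sketch of that check (1536 is this run's requested count; whether it is hard-coded or held in a variable before these lines is an assumption, the trace only shows the expanded comparisons):

  anon=$(get_meminfo AnonHugePages)         # -> 0, kB of transparent hugepages
  surp=$(get_meminfo HugePages_Surp)        # -> 0, surplus pages
  resv=$(get_meminfo HugePages_Rsvd)        # -> 0, reserved pages
  (( 1536 == nr_hugepages + surp + resv ))  # requested pool fully accounted for
  (( 1536 == nr_hugepages ))                # and none of it surplus or reserved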
00:03:23.551 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
[xtrace condensed: setup/common.sh@17-31 — same setup as the earlier calls with get=HugePages_Total, node= empty, mem_f=/proc/meminfo]
00:03:23.551 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42207780 kB' 'MemAvailable: 44091792 kB' 'Buffers: 4384 kB' 'Cached: 10812816 kB' 'SwapCached: 0 kB' 'Active: 9113488 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612276 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517384 kB' 'Mapped: 212964 kB' 'Shmem: 8536372 kB' 'KReclaimable: 272796 kB' 'Slab: 1244660 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971864 kB' 'KernelStack: 21808 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 10288620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216912 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: setup/common.sh@31-32 — per-key scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; the trace runs continue from MemTotal through VmallocChunk and is cut off here, mid-scan]
-- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.552 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
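What the @17-@33 entries above are stepping through is setup/common.sh's get_meminfo: a field lookup over /proc/meminfo, or over the per-NUMA-node copy in sysfs when a node id is supplied. A minimal standalone sketch of that lookup, assuming bash with extglob; the body is a simplified reconstruction for readability, not the shipped setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern below

    get_meminfo() {                        # get_meminfo <Field> [node]
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo mem=()
        # Prefer the per-node view when a node id is given and the sysfs file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix; drop it
        for line in "${mem[@]}"; do
            # Split e.g. "HugePages_Total:     512" into var=HugePages_Total, val=512.
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total 0          # e.g. prints 512 on this box's node 0

The per-node sysfs files prefix every line with "Node N ", which is why the trace strips that prefix before entering the IFS=': ' read loop whose iterations fill the log above.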
00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25746760 kB' 'MemUsed: 6838608 kB' 'SwapCached: 0 kB' 'Active: 3870360 kB' 'Inactive: 268160 kB' 'Active(anon): 3583620 kB' 'Inactive(anon): 0 kB' 'Active(file): 286740 kB' 'Inactive(file): 268160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3803652 kB' 'Mapped: 99668 kB' 'AnonPages: 338036 kB' 'Shmem: 3248752 kB' 'KernelStack: 13016 kB' 'PageTables: 5588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120536 kB' 'Slab: 582084 kB' 'SReclaimable: 120536 kB' 'SUnreclaim: 461548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.553 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical @31 IFS=': '/read and @32 compare/continue entries repeat for every node0 field (MemFree through HugePages_Free) that is not HugePages_Surp ...]
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698412 kB' 'MemFree: 16464276 kB' 'MemUsed: 11234136 kB' 'SwapCached: 0 kB' 'Active: 5243448 kB' 'Inactive: 1949664 kB' 'Active(anon): 5028976 kB' 'Inactive(anon): 438208 kB' 'Active(file): 214472 kB' 'Inactive(file): 1511456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013584 kB' 'Mapped: 113296 kB' 'AnonPages: 179616 kB' 'Shmem: 5287656 kB' 'KernelStack: 8824 kB' 'PageTables: 2708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152260 kB' 'Slab: 662576 kB' 'SReclaimable: 152260 kB' 'SUnreclaim: 510316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.554 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical @31 IFS=': '/read and @32 compare/continue entries repeat for every node1 field (MemFree through HugePages_Free) that is not HugePages_Surp ...]
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:03:23.555 node0=512 expecting 512
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:03:23.555 node1=1024 expecting 1024
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:23.555
00:03:23.555 real 0m3.192s
00:03:23.555 user 0m1.144s
00:03:23.555 sys 0m2.057s
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:23.555 17:52:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:23.555 ************************************
00:03:23.555 END TEST custom_alloc
00:03:23.555 ************************************
00:03:23.555 17:52:44 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:23.555 17:52:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:23.555 17:52:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
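The node0/node1 passes above reduce to one assertion: the 1536 hugepages requested by custom_alloc actually landed as 512 on node0 and 1024 on node1, with zero surplus on each. A condensed sketch of that per-node check, reusing the hypothetical get_meminfo from the earlier sketch (the expected values are taken from this run; the loop body is illustrative, not the shipped hugepages.sh):

    #!/usr/bin/env bash
    # 512 + 1024 == 1536, the total asserted at hugepages.sh@109 above.
    expected=([0]=512 [1]=1024)
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        id=${node_dir##*node}
        actual=$(get_meminfo HugePages_Total "$id")    # per-node sysfs view
        echo "node$id=$actual expecting ${expected[id]}"
        [[ $actual == "${expected[id]}" ]] || exit 1
    done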
00:03:23.556 17:52:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:23.556 ************************************
00:03:23.556 START TEST no_shrink_alloc
00:03:23.556 ************************************
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.556 17:52:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
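The @48-@70 entries above are sizing arithmetic: a 2097152 kB request at a 2048 kB default hugepage size yields 1024 pages, all pinned to the single user-supplied node before setup.sh is invoked. A back-of-the-envelope sketch (variable names illustrative; the 2048 kB figure is an assumption matching the Hugepagesize this host reports later in the log):

    #!/usr/bin/env bash
    size_kb=2097152                          # requested hugepage pool, in kB
    default_hugepage_kb=2048                 # Hugepagesize on this host
    (( size_kb >= default_hugepage_kb )) || exit 1
    nr_hugepages=$(( size_kb / default_hugepage_kb ))
    echo "NRHUGE=$nr_hugepages HUGENODE=0"   # -> NRHUGE=1024, all on node 0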
00:03:26.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.941 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
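The @95 test above gates on transparent hugepages: the AnonHugePages figure is only fetched when THP is not pinned to [never] (here the mode string is "always [madvise] never", so the lookup proceeds). A sketch of that gate, reading the same sysfs knob and reusing the hypothetical get_meminfo from earlier:

    #!/usr/bin/env bash
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is available, so the anonymous hugepage figure is worth recording.
        anon_kb=$(get_meminfo AnonHugePages)                 # system-wide /proc/meminfo view
        echo "AnonHugePages=${anon_kb} kB"
    fi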
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43260852 kB' 'MemAvailable: 45144864 kB' 'Buffers: 4384 kB' 'Cached: 10812940 kB' 'SwapCached: 0 kB' 'Active: 9115876 kB' 'Inactive: 2217824 kB' 'Active(anon): 8614664 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519528 kB' 'Mapped: 213016 kB' 'Shmem: 8536496 kB' 'KReclaimable: 272796 kB' 'Slab: 1244408 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971612 kB' 'KernelStack: 21920 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10291880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217136 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.941 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical @31 IFS=': '/read and @32 compare/continue entries repeat for each field that is not AnonHugePages; the captured excerpt ends mid-scan, after KReclaimable ...]
Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.942 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- 
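The AnonHugePages lookup above only runs because the hugepages.sh@95 test passed: /sys/kernel/mm/transparent_hugepage/enabled read 'always [madvise] never', i.e. anything but '[never]'. get_meminfo then scans the meminfo lines one by one with IFS=': ' until the requested field matches, which is why the trace visits every field before AnonHugePages. A minimal stand-alone sketch of that parsing pattern follows; the name get_meminfo_sketch and the exact structure are illustrative assumptions, not setup/common.sh verbatim:

    #!/usr/bin/env bash
    # get_meminfo_sketch FIELD [NODE] -- illustrative re-creation of the
    # IFS=': ' read loop visible in the xtrace above (not SPDK's exact code).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # With a node argument the per-NUMA-node counters are used instead;
        # with node empty (as in this run) the probe misses and /proc/meminfo stays.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # Every non-matching field is one 'continue' in the trace.
            [[ $var == "$get" ]] && echo "${val:-0}" && return 0
        done <"$mem_f"
        return 1
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on the host traced here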
setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43263632 kB' 'MemAvailable: 45147644 kB' 'Buffers: 4384 kB' 'Cached: 10812944 kB' 'SwapCached: 0 kB' 'Active: 9115184 kB' 'Inactive: 2217824 kB' 'Active(anon): 8613972 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518824 kB' 'Mapped: 212992 kB' 'Shmem: 8536500 kB' 'KReclaimable: 272796 kB' 'Slab: 1244512 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971716 kB' 'KernelStack: 21840 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10290396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.943 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.944 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.945 
17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43261448 kB' 'MemAvailable: 45145460 kB' 'Buffers: 4384 kB' 'Cached: 10812960 kB' 'SwapCached: 0 kB' 'Active: 9115852 kB' 'Inactive: 2217824 kB' 'Active(anon): 8614640 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519508 kB' 'Mapped: 212992 kB' 'Shmem: 8536516 kB' 'KReclaimable: 272796 kB' 'Slab: 1244512 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971716 kB' 'KernelStack: 21952 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10291672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217104 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.945 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.946 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.947 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[ set -x trace collapsed: the get_meminfo read loop (IFS=': '; read -r var val _) tests each remaining meminfo key -- SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- against HugePages_Rsvd and hits continue for every non-match ]
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:26.948 nr_hugepages=1024
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:26.948 resv_hugepages=0
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:26.948 surplus_hugepages=0
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:26.948 anon_hugepages=0
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
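The collapsed loop above is setup/common.sh's get_meminfo doing a plain key lookup over /proc/meminfo. A minimal bash sketch of the same pattern, simplified to a while-read loop (the script itself captures the file with mapfile and iterates the array; the function name and the IFS=': ' read idiom come from the trace, the rest is illustrative):

#!/usr/bin/env bash
# Sketch of the lookup the trace performs: split each meminfo line on
# ': ' and print the value of the first key that matches the request.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each skipped key logs a 'continue' above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # key not present in this kernel's meminfo
}

get_meminfo HugePages_Rsvd   # prints 0 on this build host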
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.948 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43260756 kB' 'MemAvailable: 45144768 kB' 'Buffers: 4384 kB' 'Cached: 10812984 kB' 'SwapCached: 0 kB' 'Active: 9115696 kB' 'Inactive: 2217824 kB' 'Active(anon): 8614484 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519268 kB' 'Mapped: 212992 kB' 'Shmem: 8536540 kB' 'KReclaimable: 272796 kB' 'Slab: 1244512 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971716 kB' 'KernelStack: 22000 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10291696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217168 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[ set -x trace collapsed: the read loop (IFS=': '; read -r var val _) tests every snapshot key from MemTotal through Unaccepted against HugePages_Total, hitting continue each time ]
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
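The get_nodes trace above shows the per-NUMA-node bookkeeping: an extglob pattern enumerates /sys/devices/system/node/node<N> and an associative array is indexed by the numeric suffix. A sketch of the same idea under extglob; the nodes_sys name and the ${node##*node} expansion come from the trace, while the awk lookup is an illustrative stand-in for the script's own get_meminfo:

#!/usr/bin/env bash
# Enumerate NUMA nodes the way the trace does and record each node's
# hugepage count. Per-node meminfo lines carry a "Node <N> " prefix,
# which is why the script strips it with ${mem[@]#Node +([0-9]) }.
shopt -s extglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}   # "node0" -> "0", same expansion as in the trace
    nodes_sys[$n]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
done
declare -p nodes_sys   # on this two-node host: ([0]="1024" [1]="0")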
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.210 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 24705296 kB' 'MemUsed: 7880072 kB' 'SwapCached: 0 kB' 'Active: 3870852 kB' 'Inactive: 268160 kB' 'Active(anon): 3584112 kB' 'Inactive(anon): 0 kB' 'Active(file): 286740 kB' 'Inactive(file): 268160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3803668 kB' 'Mapped: 99680 kB' 'AnonPages: 338420 kB' 'Shmem: 3248768 kB' 'KernelStack: 13064 kB' 'PageTables: 5592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120536 kB' 'Slab: 581700 kB' 'SReclaimable: 120536 kB' 'SUnreclaim: 461164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[ set -x trace collapsed: the read loop tests every node0 key from MemTotal through HugePages_Free against HugePages_Surp, hitting continue each time ]
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
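The arithmetic guards at hugepages.sh@106 and @109 in the trace encode the invariant being verified here: the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages. A standalone sketch of that check; read_key is a hypothetical helper for this example, not part of the SPDK scripts:

#!/usr/bin/env bash
# Verify the hugepage accounting the trace checks:
# total == requested + surplus + reserved.
nr_hugepages=1024   # the requested count echoed earlier in the log
read_key() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
total=$(read_key HugePages_Total)
surp=$(read_key HugePages_Surp)
resv=$(read_key HugePages_Rsvd)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi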
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:27.211 node0=1024 expecting 1024
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.211 17:52:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:30.503 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.503 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.503 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
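The @192 assignments above are how the test parameterizes scripts/setup.sh, and the INFO line is setup.sh declining to shrink the existing 1024-page pool down to the requested 512 (hence the test's name, no_shrink_alloc). Assuming a local SPDK checkout with root access, the same request could plausibly be reproduced by hand:

# Re-run the allocation step the trace performs: ask setup.sh for 512
# hugepages on node 0 without clearing what is already mapped.
# (Direct invocation and the sudo requirement are assumptions about a
# local setup; the CI harness runs setup.sh from its own workspace.)
cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
sudo CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 ./scripts/setup.sh
# With 1024 pages already on node0 it prints the INFO line seen above
# instead of reducing the pool.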
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.503 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43226776 kB' 'MemAvailable: 45110788 kB' 'Buffers: 4384 kB' 'Cached: 10813096 kB' 'SwapCached: 0 kB' 'Active: 9113400 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612188 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517440 kB' 'Mapped: 213016 kB' 'Shmem: 8536652 kB' 'KReclaimable: 272796 kB' 'Slab: 1244752 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971956 kB' 'KernelStack: 21872 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10289908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
[ set -x trace collapsed: the read loop tests the snapshot keys from MemTotal through VmallocUsed against AnonHugePages, hitting continue each time; the scan continues below ]
00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.504 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43227944 kB' 'MemAvailable: 45111956 kB' 'Buffers: 4384 kB' 'Cached: 10813100 kB' 'SwapCached: 0 kB' 'Active: 9113644 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612432 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517148 kB' 'Mapped: 212976 kB' 'Shmem: 8536656 kB' 'KReclaimable: 272796 kB' 'Slab: 1244744 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971948 kB' 'KernelStack: 21856 kB' 
'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10289928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217024 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- 
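The whole block above is one repeated pattern: get_meminfo in setup/common.sh snapshots /proc/meminfo into an array, then walks it pair by pair, hitting continue for every key until the requested one matches, then echoes the bare value and returns. A minimal Bash sketch of that scan, assuming Bash 4+ for mapfile (get_meminfo_value is a hypothetical name, not the script's own):

    # Print the numeric value of one /proc/meminfo key, mirroring the traced scan.
    get_meminfo_value() {
        local get=$1 mem line var val _
        mapfile -t mem < /proc/meminfo       # one "Key:   value kB" entry per slot
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"    # split into key, number, optional unit
            [[ $var == "$get" ]] || continue # the traced loop skips keys exactly like this
            echo "$val"
            return 0
        done
        return 1                             # requested key not present
    }

Against the snapshot above, get_meminfo_value HugePages_Surp would print 0. A single awk or grep lookup would answer one key more directly; the array form lets the script reuse one snapshot layout for several keys.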
00:03:30.505 17:52:51 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: the scan skips every key (MemTotal through HugePages_Rsvd) with setup/common.sh@32 continue, until it reaches HugePages_Surp]
00:03:30.506 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.506 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.506 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:30.506 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:30.506 17:52:51 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: same setup as above; node= is empty, so /proc/meminfo is mapped into mem[] again]
00:03:30.507 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43227280 kB' 'MemAvailable: 45111292 kB' 'Buffers: 4384 kB' 'Cached: 10813116 kB' 'SwapCached: 0 kB' 'Active: 9114888 kB' 'Inactive: 2217824 kB' 'Active(anon): 8613676 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518412 kB' 'Mapped: 212976 kB' 'Shmem: 8536672 kB' 'KReclaimable: 272796 kB' 'Slab: 1244736 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971940 kB' 'KernelStack: 21872 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10304712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216976 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
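The condensed preamble notes, node= and the existence test on /sys/devices/system/node/node/meminfo, point at the other half of the helper: given a NUMA node number it would read that node's meminfo, whose lines carry a "Node N " prefix, and the ${mem[@]#Node +([0-9]) } expansion visible in the trace strips that prefix so the scan stays identical. A hedged sketch of the file selection (pick_meminfo is a hypothetical name; extglob is needed for the +([0-9]) pattern):

    shopt -s extglob
    # Emit meminfo lines for one NUMA node, or the whole system if $1 is empty.
    pick_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        # with node empty this tests .../node/node/meminfo, which never exists,
        # so the global file wins -- exactly what the trace above shows
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # drop any "Node 0 " line prefix
        printf '%s\n' "${mem[@]}"
    }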
00:03:30.507 17:52:51 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: the scan skips every key (MemTotal through HugePages_Free) with setup/common.sh@32 continue, until it reaches HugePages_Rsvd]
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:03:30.771 nr_hugepages=1024
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:03:30.771 resv_hugepages=0
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:03:30.771 surplus_hugepages=0
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:03:30.771 anon_hugepages=0
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: same setup as above; /proc/meminfo is mapped into mem[] again]
00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43230132 kB' 'MemAvailable: 45114144 kB' 'Buffers: 4384 kB' 'Cached: 10813140 kB' 'SwapCached: 0 kB' 'Active: 9113444 kB' 'Inactive: 2217824 kB' 'Active(anon): 8612232 kB' 'Inactive(anon): 438208 kB' 'Active(file): 501212 kB' 'Inactive(file): 1779616 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388092 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516940 kB' 'Mapped: 212976 kB' 'Shmem: 8536696 kB' 'KReclaimable: 272796 kB' 'Slab: 1244728 kB' 'SReclaimable: 272796 kB' 'SUnreclaim: 971932 kB' 'KernelStack: 21808 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 10289732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216944 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB'
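The two arithmetic checks above are what this whole sequence builds up to. Their left-hand side was already expanded to 1024 when traced, which matches both HugePages_Total and HugePages_Free in the snapshots, so a plausible reading is that the script asserts the pool is fully idle, with no page surplus, reserved, or in use, before no_shrink_alloc starts allocating. A sketch under that assumption (verify_hugepages is a hypothetical name; it reuses the get_meminfo_value sketch from earlier):

    # Assert every configured huge page is still free: with surp=0 and resv=0
    # both checks collapse to free == nr_hugepages.
    verify_hugepages() {
        local nr_hugepages=$1 free surp resv
        free=$(get_meminfo_value HugePages_Free)
        surp=$(get_meminfo_value HugePages_Surp)
        resv=$(get_meminfo_value HugePages_Rsvd)
        (( free == nr_hugepages + surp + resv )) || return 1
        (( free == nr_hugepages ))
    }

Here verify_hugepages 1024 would succeed, which is consistent with the trace continuing straight on to get_meminfo HugePages_Total.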
34359738367 kB' 'VmallocUsed: 216944 kB' 'VmallocChunk: 0 kB' 'Percpu: 90944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3226996 kB' 'DirectMap2M: 12187648 kB' 'DirectMap1G: 54525952 kB' 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 
17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.772 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.773 
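Everything traced above under get_meminfo is one helper in setup/common.sh: it maps a meminfo file into an array, strips the per-node "Node N " prefix, and scans field by field until the requested key matches. A condensed sketch reconstructed from the line references in the xtrace, illustrative rather than the verbatim SPDK source:

  shopt -s extglob

  get_meminfo() {
      local get=$1 node=$2
      local var val _
      local mem_f=/proc/meminfo mem
      # Per-node counters live in sysfs; with no node given, the test above
      # probes the nonexistent .../node/node/meminfo and keeps /proc/meminfo.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Node files prefix each line with "Node N "; strip it so the field
      # names line up with the /proc/meminfo format.
      mem=("${mem[@]#Node +([0-9]) }")
      # The printf dump in the trace is exactly this array being fed back
      # into the read loop, one "Field: value" pair per line.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total     # -> 1024 on this runner (whole system)
  get_meminfo HugePages_Surp 0    # -> 0 (NUMA node 0 only)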
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 24688672 kB' 'MemUsed: 7896696 kB' 'SwapCached: 0 kB' 'Active: 3870404 kB' 'Inactive: 268160 kB' 'Active(anon): 3583664 kB' 'Inactive(anon): 0 kB' 'Active(file): 286740 kB' 'Inactive(file): 268160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3803696 kB' 'Mapped: 99684 kB' 'AnonPages: 338040 kB' 'Shmem: 3248796 kB' 'KernelStack: 13048 kB' 'PageTables: 5588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120536 kB' 'Slab: 581892 kB' 'SReclaimable: 120536 kB' 'SUnreclaim: 461356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.773 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / continue iterations for every remaining node0 field (MemFree through HugePages_Free), none matching \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:30.775 node0=1024 expecting 1024
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:30.775 
00:03:30.775 real 0m7.129s
00:03:30.775 user 0m2.633s
00:03:30.775 sys 0m4.601s
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:30.775 17:52:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:30.775 ************************************
00:03:30.775 END TEST no_shrink_alloc
00:03:30.775 ************************************
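The bookkeeping that just printed 'node0=1024 expecting 1024', and the clear_hp teardown traced next, both walk the per-node sysfs tree. A minimal sketch of that pattern, assuming the standard kernel nr_hugepages leaf and the 2048kB pool size reported in the dump above (the trace itself only shows the hugepages-* glob):

  shopt -s extglob nullglob

  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} reduces the path to the bare node number.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "node0=${nodes_sys[0]} expecting 1024"

  # clear_hp equivalent: hand every per-node pool back to the kernel.
  # Writing nr_hugepages requires root, which the harness runs as.
  for node in "${!nodes_sys[@]}"; do
      for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done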
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:30.775 17:52:52 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:30.775 
00:03:30.775 real 0m22.942s
00:03:30.775 user 0m7.826s
00:03:30.775 sys 0m13.744s
00:03:30.775 17:52:52 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:30.775 17:52:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:30.775 ************************************
00:03:30.775 END TEST hugepages
00:03:30.775 ************************************
00:03:30.775 17:52:52 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:30.775 17:52:52 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:30.775 17:52:52 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:30.775 17:52:52 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:30.775 ************************************
00:03:30.775 START TEST driver
00:03:30.775 ************************************
00:03:30.775 17:52:52 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:31.034 * Looking for test storage...
00:03:31.034 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:31.034 17:52:52 setup.sh.driver -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:31.034 17:52:52 setup.sh.driver -- common/autotest_common.sh@1681 -- # lcov --version
00:03:31.034 17:52:52 setup.sh.driver -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:31.034 17:52:52 setup.sh.driver -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-:
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-:
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<'
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@345 -- # : 1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@353 -- # local d=1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@355 -- # echo 1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@353 -- # local d=2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@355 -- # echo 2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:31.034 17:52:52 setup.sh.driver -- scripts/common.sh@368 -- # return 0
00:03:31.034 17:52:52 setup.sh.driver -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:31.034 17:52:52 setup.sh.driver -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 
00:03:31.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:31.034 --rc genhtml_branch_coverage=1 
00:03:31.034 --rc genhtml_function_coverage=1 
00:03:31.034 --rc genhtml_legend=1 
00:03:31.034 --rc geninfo_all_blocks=1 
00:03:31.034 --rc geninfo_unexecuted_blocks=1 
00:03:31.034 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 
00:03:31.034 '
[... the LCOV_OPTS= assignment (@1694) and the export 'LCOV=lcov ...' / LCOV='lcov ...' pair (@1695) repeat the same --rc option block three more times ...]
00:03:31.034 17:52:52 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:31.035 17:52:52 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:31.035 17:52:52 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
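The lt 1.15 2 probe just traced (the harness checking whether the installed lcov predates version 2 before choosing coverage flags) boils down to a component-wise integer comparison. A reduced sketch of the scripts/common.sh walk shown in the xtrace; the case/counter bookkeeping for the other operators is simplified here:

  # Split both versions on '.', '-' and ':', then compare component-wise,
  # padding the shorter one with zeros -- exactly the
  # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop in the trace.
  cmp_versions() {
      local IFS=.-:
      local ver1 ver2 op=$2 v d1 d2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          (( d1 > d2 )) && { [[ $op == *'>'* ]]; return; }
          (( d1 < d2 )) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]  # all components equal: only <=, >=, == succeed
  }

  lt() { cmp_versions "$1" '<' "$2"; }

  lt 1.15 2 && echo 'lcov 1.15 predates 2'   # true on this runner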
00:03:36.305 17:52:57 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:36.305 17:52:57 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:36.305 17:52:57 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:36.305 17:52:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:36.305 ************************************
00:03:36.305 START TEST guess_driver
00:03:36.305 ************************************
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 ))
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 
00:03:36.305 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:36.305 Looking for driver=vfio-pci
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.305 17:52:57 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:39.590 17:53:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:39.590 17:53:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:39.590 17:53:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58 / @61 / @57 marker-check cycle repeats for each remaining config line, every marker matching vfio-pci; the final cycle runs at 00:03:40.968 17:53:02 ...]
00:03:40.968 17:53:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:40.968 17:53:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:40.968 17:53:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:40.968 17:53:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:45.153 
00:03:45.153 real 0m9.247s
00:03:45.153 user 0m2.293s
00:03:45.153 sys 0m4.671s
00:03:45.153 17:53:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:45.153 17:53:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:45.153 ************************************
00:03:45.153 END TEST guess_driver
00:03:45.153 ************************************
00:03:45.153 
00:03:45.153 real 0m14.275s
00:03:45.153 user 0m3.735s
00:03:45.153 sys 0m7.466s
00:03:45.153 17:53:06 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:45.154 17:53:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:45.154 ************************************
00:03:45.154 END TEST driver
00:03:45.154 ************************************
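TEST driver above settled on vfio-pci because this runner has populated IOMMU groups (the (( 176 > 0 )) check) and modprobe can resolve vfio_pci to real kernel objects (the insmod ... .ko.xz list). A paraphrased sketch of that decision; the non-vfio fallback path is not exercised in this log and is left out here as an assumption about the full script:

  shopt -s nullglob

  pick_driver() {
      # vfio-pci is only usable with a working IOMMU (non-empty
      # /sys/kernel/iommu_groups) or with unsafe no-IOMMU mode enabled.
      local unsafe_vfio=N
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
          # is_driver equivalent: the module resolves to loadable .ko objects.
          if modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
              echo vfio-pci
              return 0
          fi
      fi
      echo 'No valid driver found'
      return 1
  }

  driver=$(pick_driver)   # -> vfio-pci on this box (176 IOMMU groups)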
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.154 17:53:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.154 17:53:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.154 ************************************ 00:03:45.154 START TEST devices 00:03:45.154 ************************************ 00:03:45.154 17:53:06 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:45.413 * Looking for test storage... 00:03:45.413 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:45.413 17:53:06 setup.sh.devices -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:45.413 17:53:06 setup.sh.devices -- common/autotest_common.sh@1681 -- # lcov --version 00:03:45.413 17:53:06 setup.sh.devices -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:45.413 17:53:06 setup.sh.devices -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.413 17:53:06 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:03:45.413 17:53:06 setup.sh.devices -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.413 17:53:06 setup.sh.devices -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:45.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.414 --rc genhtml_branch_coverage=1 00:03:45.414 --rc genhtml_function_coverage=1 00:03:45.414 --rc genhtml_legend=1 00:03:45.414 --rc geninfo_all_blocks=1 00:03:45.414 --rc geninfo_unexecuted_blocks=1 00:03:45.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.414 ' 00:03:45.414 17:53:06 setup.sh.devices -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:45.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.414 --rc genhtml_branch_coverage=1 00:03:45.414 --rc genhtml_function_coverage=1 00:03:45.414 --rc genhtml_legend=1 00:03:45.414 --rc geninfo_all_blocks=1 00:03:45.414 --rc geninfo_unexecuted_blocks=1 00:03:45.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.414 ' 00:03:45.414 17:53:06 setup.sh.devices -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:45.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.414 --rc genhtml_branch_coverage=1 00:03:45.414 --rc genhtml_function_coverage=1 00:03:45.414 --rc genhtml_legend=1 00:03:45.414 --rc geninfo_all_blocks=1 00:03:45.414 --rc geninfo_unexecuted_blocks=1 00:03:45.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.414 ' 00:03:45.414 17:53:06 setup.sh.devices -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:45.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.414 --rc genhtml_branch_coverage=1 00:03:45.414 --rc genhtml_function_coverage=1 00:03:45.414 --rc genhtml_legend=1 00:03:45.414 --rc geninfo_all_blocks=1 00:03:45.414 --rc geninfo_unexecuted_blocks=1 00:03:45.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:45.414 ' 00:03:45.414 17:53:06 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:45.414 17:53:06 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:45.414 17:53:06 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.414 17:53:06 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:03:49.602 17:53:10 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.602 No valid GPT data, bailing 00:03:49.602 17:53:10 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:03:49.602 17:53:10 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.602 17:53:10 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.602 17:53:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.602 17:53:10 
setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.602 17:53:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.602 ************************************ 00:03:49.602 START TEST nvme_mount 00:03:49.602 ************************************ 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.602 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.603 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.603 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.603 17:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:50.171 Creating new GPT entries in memory. 00:03:50.171 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.171 other utilities. 00:03:50.171 17:53:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.171 17:53:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.171 17:53:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.171 17:53:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.171 17:53:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.108 Creating new GPT entries in memory. 00:03:51.108 The operation has completed successfully. 
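The partitioning step traced above follows a fixed pattern: zap any existing GPT/MBR structures, create one 1 GiB partition while holding a flock on the disk so concurrent setup scripts cannot race, and wait for the kernel's partition add uevent before touching /dev/nvme0n1p1. The sector math matches the trace: size=1073741824 bytes / 512 = 2097152 sectors, so part_end = 2048 + 2097152 - 1 = 2099199. A minimal sketch of the same flow, assuming a scratch disk at $disk; scripts/sync_dev_uevents.sh is SPDK's own uevent helper, and udevadm settle is used here as a generic stand-in for it:

    #!/usr/bin/env bash
    # Sketch of the partition_drive pattern seen above; destructive, run only on a scratch disk.
    set -euo pipefail
    disk=/dev/nvme0n1                          # assumption: the same test disk as this run
    size=1073741824                            # 1 GiB in bytes
    sectors=$(( size / 512 ))                  # 2097152 512-byte sectors
    part_start=2048
    part_end=$(( part_start + sectors - 1 ))   # 2099199, matching the sgdisk call in the trace
    sgdisk "$disk" --zap-all                   # destroy GPT and protective MBR
    flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}
    udevadm settle                             # stand-in for scripts/sync_dev_uevents.sh
    [[ -b ${disk}p1 ]] && echo "partition node ready"

Holding the flock across sgdisk is the important detail: it serializes partition-table rewrites against any other script probing the same disk, which is why the trace shows "flock /dev/nvme0n1 sgdisk ..." rather than a bare sgdisk call.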
00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1446163 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.108 17:53:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.391 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.392 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.392 17:53:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.650 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:54.650 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:54.650 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.650 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.650 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:54.650 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:54.650 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.650 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.651 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.908 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.909 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.909 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.909 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:54.909 17:53:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.909 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.909 17:53:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.438 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.439 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.696 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.696 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:57.696 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.696 17:53:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.696 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.696 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:57.696 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.696 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.696 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.696 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # 
local pci status 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.954 17:53:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.482 17:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.741 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.741 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:00.741 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.741 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.999 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.999 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:00.999 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:00.999 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:00.999 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.999 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.000 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.000 17:53:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.000 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.000 00:04:01.000 real 0m11.863s 00:04:01.000 user 0m3.201s 00:04:01.000 sys 0m6.455s 00:04:01.000 17:53:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.000 17:53:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.000 ************************************ 00:04:01.000 END TEST nvme_mount 00:04:01.000 ************************************ 00:04:01.000 17:53:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:01.000 17:53:22 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.000 17:53:22 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.000 17:53:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.000 ************************************ 00:04:01.000 START TEST dm_mount 00:04:01.000 ************************************ 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # 
local disk=nvme0n1 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.000 17:53:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:01.934 Creating new GPT entries in memory. 00:04:01.934 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:01.934 other utilities. 00:04:01.934 17:53:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:01.934 17:53:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.934 17:53:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.934 17:53:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.934 17:53:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.312 Creating new GPT entries in memory. 00:04:03.312 The operation has completed successfully. 00:04:03.312 17:53:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.312 17:53:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.312 17:53:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.312 17:53:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.312 17:53:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:03.949 The operation has completed successfully. 
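With both 1 GiB partitions in place (sectors 2048-2099199 and 2099200-4196351, 2097152 sectors each), the dm_mount test next builds a device-mapper node named nvme_dm_test over them and waits for /dev/mapper/nvme_dm_test to appear. The exact table devices.sh loads is not shown in this excerpt; a plausible minimal sketch, assuming a linear concatenation of the two partitions, is:

    #!/usr/bin/env bash
    # Sketch: concatenate the two test partitions into one dm device.
    # The linear-concat table layout is an assumption, not taken from this log.
    set -euo pipefail
    seg=2097152   # sectors per partition: 2099199 - 2048 + 1
    dmsetup create nvme_dm_test <<EOF
    0 $seg linear /dev/nvme0n1p1 0
    $seg $seg linear /dev/nvme0n1p2 0
    EOF
    ls -l /dev/mapper/nvme_dm_test          # the node the test polls for before running mkfs
    readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-0, as the trace below shows

Each table line is "logical_start length linear backing_device offset" in 512-byte sectors; stacking the second segment at offset $seg yields one 2 GiB device, and both partitions then show up as holders of dm-0 (the /sys/class/block/nvme0n1p*/holders/dm-0 checks in the trace).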
00:04:03.949 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.949 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.949 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1450414 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.208 17:53:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.490 17:53:28 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.490 17:53:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.022 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.023 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.023 00:04:10.023 real 0m9.122s 00:04:10.023 user 0m1.927s 00:04:10.023 sys 0m4.085s 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.023 17:53:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.023 ************************************ 00:04:10.023 END TEST dm_mount 00:04:10.023 ************************************ 00:04:10.023 17:53:31 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.023 17:53:31 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.023 17:53:31 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.023 17:53:31 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.023 17:53:31 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.281 17:53:31 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.281 17:53:31 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.541 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.541 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.541 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.541 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.541 17:53:31 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:04:10.541 17:53:31 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:10.541 17:53:31 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.541 17:53:31 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.541 17:53:31 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.541 17:53:31 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.541 17:53:31 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.541 00:04:10.541 real 0m25.234s 00:04:10.541 user 0m6.530s 00:04:10.541 sys 0m13.250s 00:04:10.541 17:53:31 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.541 17:53:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.541 ************************************ 00:04:10.541 END TEST devices 00:04:10.541 ************************************ 00:04:10.541 00:04:10.541 real 1m27.539s 00:04:10.541 user 0m26.125s 00:04:10.541 sys 0m49.710s 00:04:10.541 17:53:31 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.541 17:53:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.541 ************************************ 00:04:10.541 END TEST setup.sh 00:04:10.541 ************************************ 00:04:10.541 17:53:31 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:13.830 Hugepages 00:04:13.830 node hugesize free / total 00:04:13.830 node0 1048576kB 0 / 0 00:04:13.830 node0 2048kB 1024 / 1024 00:04:13.830 node1 1048576kB 0 / 0 00:04:13.830 node1 2048kB 1024 / 1024 00:04:13.830 00:04:13.830 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.830 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.830 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:13.830 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:13.830 17:53:34 -- spdk/autotest.sh@117 -- # uname -s 00:04:13.830 17:53:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:13.830 17:53:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:13.830 17:53:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:17.121 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:00:04.1 (8086 2021): ioatdma 
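The setup.sh status dump above reports 1024 free of 1024 total 2048kB hugepages on each NUMA node before listing the I/OAT and NVMe functions. Those counters come straight from standard kernel sysfs, independent of SPDK; a small sketch reading the same numbers:

    # Sketch: report free/total 2MB hugepages per NUMA node via sysfs.
    for node in /sys/devices/system/node/node*; do
        hp="$node/hugepages/hugepages-2048kB"
        [[ -d $hp ]] || continue
        printf '%s 2048kB %s / %s\n' "${node##*/}" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done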
-> vfio-pci 00:04:17.121 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.121 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.494 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.751 17:53:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:19.693 17:53:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:19.693 17:53:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:19.693 17:53:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.693 17:53:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:19.693 17:53:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:19.693 17:53:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:19.693 17:53:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.693 17:53:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:19.693 17:53:41 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.693 17:53:41 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:19.693 17:53:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:19.693 17:53:41 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.993 Waiting for block devices as requested 00:04:22.993 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.252 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.252 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.252 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.512 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.512 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.512 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.512 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.772 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.772 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.772 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:24.031 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.031 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.031 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.031 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.291 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.291 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:24.551 17:53:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:24.551 17:53:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:04:24.551 17:53:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:24.551 17:53:45 -- common/autotest_common.sh@1490 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:24.551 17:53:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:24.551 17:53:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:24.551 17:53:45 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:24.551 17:53:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:24.551 17:53:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:24.551 17:53:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:24.551 17:53:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:24.551 17:53:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:24.551 17:53:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:24.551 17:53:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:24.551 17:53:45 -- common/autotest_common.sh@1541 -- # continue 00:04:24.551 17:53:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:24.551 17:53:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.551 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:24.551 17:53:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:24.551 17:53:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.551 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:24.551 17:53:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:27.848 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.848 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.107 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.107 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.107 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.107 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.489 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:29.749 17:53:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:29.749 17:53:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.749 17:53:51 -- common/autotest_common.sh@10 -- # set +x 00:04:29.749 17:53:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:29.749 17:53:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:29.749 17:53:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.749 17:53:51 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:29.749 17:53:51 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:29.749 17:53:51 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:29.749 17:53:51 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:29.749 17:53:51 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:29.749 17:53:51 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:29.749 17:53:51 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:29.749 17:53:51 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.749 17:53:51 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.749 17:53:51 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:29.749 17:53:51 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:29.749 17:53:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:29.749 17:53:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:29.749 17:53:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:30.009 17:53:51 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:30.009 17:53:51 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:30.009 17:53:51 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:30.009 17:53:51 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:30.009 17:53:51 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:30.009 17:53:51 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:30.009 17:53:51 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1460188 00:04:30.009 17:53:51 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.009 17:53:51 -- common/autotest_common.sh@1583 -- # waitforlisten 1460188 00:04:30.009 17:53:51 -- common/autotest_common.sh@831 -- # '[' -z 1460188 ']' 00:04:30.009 17:53:51 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.009 17:53:51 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.009 17:53:51 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.009 17:53:51 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.009 17:53:51 -- common/autotest_common.sh@10 -- # set +x 00:04:30.009 [2024-10-05 17:53:51.246366] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
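Two helper traces appear just above: get_nvme_bdfs pulls controller addresses out of gen_nvme.sh with jq, and get_nvme_bdfs_by_id keeps only controllers whose PCI device ID (0x0a54 here) matches what sysfs reports. A condensed sketch of both steps, assuming the workspace layout used in this run:

    # Sketch: enumerate NVMe BDFs, then keep those with a given device ID.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    wanted=0x0a54
    matches=()
    for bdf in "${bdfs[@]}"; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $dev == "$wanted" ]] && matches+=("$bdf")
    done
    printf '%s\n' "${matches[@]}"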
00:04:30.009 [2024-10-05 17:53:51.246443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460188 ] 00:04:30.009 [2024-10-05 17:53:51.314924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.009 [2024-10-05 17:53:51.390013] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.269 17:53:51 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.269 17:53:51 -- common/autotest_common.sh@864 -- # return 0 00:04:30.269 17:53:51 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:30.269 17:53:51 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:30.269 17:53:51 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:33.560 nvme0n1 00:04:33.560 17:53:54 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:33.560 [2024-10-05 17:53:54.785235] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:33.560 request: 00:04:33.560 { 00:04:33.560 "nvme_ctrlr_name": "nvme0", 00:04:33.560 "password": "test", 00:04:33.560 "method": "bdev_nvme_opal_revert", 00:04:33.560 "req_id": 1 00:04:33.560 } 00:04:33.560 Got JSON-RPC error response 00:04:33.560 response: 00:04:33.560 { 00:04:33.560 "code": -32602, 00:04:33.560 "message": "Invalid parameters" 00:04:33.560 } 00:04:33.560 17:53:54 -- common/autotest_common.sh@1589 -- # true 00:04:33.560 17:53:54 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:33.560 17:53:54 -- common/autotest_common.sh@1593 -- # killprocess 1460188 00:04:33.560 17:53:54 -- common/autotest_common.sh@950 -- # '[' -z 1460188 ']' 00:04:33.560 17:53:54 -- common/autotest_common.sh@954 -- # kill -0 1460188 00:04:33.560 17:53:54 -- common/autotest_common.sh@955 -- # uname 00:04:33.560 17:53:54 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.560 17:53:54 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460188 00:04:33.560 17:53:54 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.560 17:53:54 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.560 17:53:54 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460188' 00:04:33.560 killing process with pid 1460188 00:04:33.560 17:53:54 -- common/autotest_common.sh@969 -- # kill 1460188 00:04:33.560 17:53:54 -- common/autotest_common.sh@974 -- # wait 1460188 00:04:36.095 17:53:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:36.095 17:53:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:36.095 17:53:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:36.095 17:53:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:36.095 17:53:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:36.095 17:53:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.095 17:53:57 -- common/autotest_common.sh@10 -- # set +x 00:04:36.095 17:53:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:36.095 17:53:57 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:36.095 17:53:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.095 17:53:57 -- common/autotest_common.sh@1107 
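The JSON-RPC exchange above is expected to fail: this controller reports no Opal support, so bdev_nvme_opal_revert returns -32602 and the harness swallows the error with true. The same two rpc.py calls, issued against a running spdk_tgt and tolerating the expected failure (both invocations appear verbatim in the trace above):

    # Sketch: attach the controller, then attempt the Opal revert.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc="$rootdir/scripts/rpc.py"
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    # Returns "Invalid parameters" (-32602) on drives without Opal support.
    "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || true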
-- # xtrace_disable 00:04:36.095 17:53:57 -- common/autotest_common.sh@10 -- # set +x 00:04:36.095 ************************************ 00:04:36.095 START TEST env 00:04:36.095 ************************************ 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:36.095 * Looking for test storage... 00:04:36.095 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.095 17:53:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.095 17:53:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.095 17:53:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.095 17:53:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.095 17:53:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.095 17:53:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.095 17:53:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.095 17:53:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.095 17:53:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.095 17:53:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.095 17:53:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.095 17:53:57 env -- scripts/common.sh@344 -- # case "$op" in 00:04:36.095 17:53:57 env -- scripts/common.sh@345 -- # : 1 00:04:36.095 17:53:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.095 17:53:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.095 17:53:57 env -- scripts/common.sh@365 -- # decimal 1 00:04:36.095 17:53:57 env -- scripts/common.sh@353 -- # local d=1 00:04:36.095 17:53:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.095 17:53:57 env -- scripts/common.sh@355 -- # echo 1 00:04:36.095 17:53:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.095 17:53:57 env -- scripts/common.sh@366 -- # decimal 2 00:04:36.095 17:53:57 env -- scripts/common.sh@353 -- # local d=2 00:04:36.095 17:53:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.095 17:53:57 env -- scripts/common.sh@355 -- # echo 2 00:04:36.095 17:53:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.095 17:53:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.095 17:53:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.095 17:53:57 env -- scripts/common.sh@368 -- # return 0 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.095 --rc genhtml_branch_coverage=1 00:04:36.095 --rc genhtml_function_coverage=1 00:04:36.095 --rc genhtml_legend=1 00:04:36.095 --rc geninfo_all_blocks=1 00:04:36.095 --rc geninfo_unexecuted_blocks=1 00:04:36.095 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.095 ' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.095 --rc genhtml_branch_coverage=1 00:04:36.095 --rc genhtml_function_coverage=1 00:04:36.095 --rc genhtml_legend=1 00:04:36.095 --rc geninfo_all_blocks=1 00:04:36.095 --rc geninfo_unexecuted_blocks=1 00:04:36.095 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.095 ' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.095 --rc genhtml_branch_coverage=1 00:04:36.095 --rc genhtml_function_coverage=1 00:04:36.095 --rc genhtml_legend=1 00:04:36.095 --rc geninfo_all_blocks=1 00:04:36.095 --rc geninfo_unexecuted_blocks=1 00:04:36.095 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.095 ' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.095 --rc genhtml_branch_coverage=1 00:04:36.095 --rc genhtml_function_coverage=1 00:04:36.095 --rc genhtml_legend=1 00:04:36.095 --rc geninfo_all_blocks=1 00:04:36.095 --rc geninfo_unexecuted_blocks=1 00:04:36.095 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.095 ' 00:04:36.095 17:53:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.095 17:53:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.095 ************************************ 00:04:36.095 START TEST env_memory 00:04:36.095 ************************************ 00:04:36.095 17:53:57 env.env_memory -- 
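The long scripts/common.sh trace above is cmp_versions deciding that lcov 1.15 is older than 2, which selects the legacy --rc lcov_* coverage options. The same dotted-version test can be written compactly with sort -V; this is a simplification of the traced loop, not the harness's own code, and it assumes lcov is on PATH:

    # Sketch: succeed when version $1 sorts strictly before version $2.
    version_lt() {
        [[ $1 != "$2" ]] &&
            [[ $(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
    }
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov < 2: keep legacy --rc lcov_branch_coverage/lcov_function_coverage"
    fi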
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.095 00:04:36.095 00:04:36.095 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.095 http://cunit.sourceforge.net/ 00:04:36.095 00:04:36.095 00:04:36.095 Suite: memory 00:04:36.095 Test: alloc and free memory map ...[2024-10-05 17:53:57.392828] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.095 passed 00:04:36.095 Test: mem map translation ...[2024-10-05 17:53:57.405502] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.095 [2024-10-05 17:53:57.405518] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.095 [2024-10-05 17:53:57.405553] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.095 [2024-10-05 17:53:57.405562] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.095 passed 00:04:36.095 Test: mem map registration ...[2024-10-05 17:53:57.425348] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:36.095 [2024-10-05 17:53:57.425365] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:36.095 passed 00:04:36.095 Test: mem map adjacent registrations ...passed 00:04:36.095 00:04:36.095 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.095 suites 1 1 n/a 0 0 00:04:36.095 tests 4 4 4 0 0 00:04:36.095 asserts 152 152 152 0 n/a 00:04:36.095 00:04:36.095 Elapsed time = 0.082 seconds 00:04:36.095 00:04:36.095 real 0m0.095s 00:04:36.095 user 0m0.082s 00:04:36.095 sys 0m0.013s 00:04:36.095 17:53:57 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.095 17:53:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.095 ************************************ 00:04:36.095 END TEST env_memory 00:04:36.095 ************************************ 00:04:36.095 17:53:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.095 17:53:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.095 17:53:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.095 ************************************ 00:04:36.095 START TEST env_vtophys 00:04:36.095 ************************************ 00:04:36.095 17:53:57 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.095 EAL: lib.eal log level changed from notice to debug 00:04:36.095 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.095 EAL: Detected lcore 1 as core 1 on socket 0 00:04:36.095 EAL: Detected lcore 2 as core 2 on socket 0 00:04:36.095 EAL: Detected lcore 3 as 
core 3 on socket 0 00:04:36.095 EAL: Detected lcore 4 as core 4 on socket 0 00:04:36.095 EAL: Detected lcore 5 as core 5 on socket 0 00:04:36.095 EAL: Detected lcore 6 as core 6 on socket 0 00:04:36.095 EAL: Detected lcore 7 as core 8 on socket 0 00:04:36.095 EAL: Detected lcore 8 as core 9 on socket 0 00:04:36.095 EAL: Detected lcore 9 as core 10 on socket 0 00:04:36.095 EAL: Detected lcore 10 as core 11 on socket 0 00:04:36.095 EAL: Detected lcore 11 as core 12 on socket 0 00:04:36.095 EAL: Detected lcore 12 as core 13 on socket 0 00:04:36.095 EAL: Detected lcore 13 as core 14 on socket 0 00:04:36.096 EAL: Detected lcore 14 as core 16 on socket 0 00:04:36.096 EAL: Detected lcore 15 as core 17 on socket 0 00:04:36.096 EAL: Detected lcore 16 as core 18 on socket 0 00:04:36.096 EAL: Detected lcore 17 as core 19 on socket 0 00:04:36.096 EAL: Detected lcore 18 as core 20 on socket 0 00:04:36.096 EAL: Detected lcore 19 as core 21 on socket 0 00:04:36.096 EAL: Detected lcore 20 as core 22 on socket 0 00:04:36.096 EAL: Detected lcore 21 as core 24 on socket 0 00:04:36.096 EAL: Detected lcore 22 as core 25 on socket 0 00:04:36.096 EAL: Detected lcore 23 as core 26 on socket 0 00:04:36.096 EAL: Detected lcore 24 as core 27 on socket 0 00:04:36.096 EAL: Detected lcore 25 as core 28 on socket 0 00:04:36.096 EAL: Detected lcore 26 as core 29 on socket 0 00:04:36.096 EAL: Detected lcore 27 as core 30 on socket 0 00:04:36.096 EAL: Detected lcore 28 as core 0 on socket 1 00:04:36.096 EAL: Detected lcore 29 as core 1 on socket 1 00:04:36.096 EAL: Detected lcore 30 as core 2 on socket 1 00:04:36.096 EAL: Detected lcore 31 as core 3 on socket 1 00:04:36.096 EAL: Detected lcore 32 as core 4 on socket 1 00:04:36.096 EAL: Detected lcore 33 as core 5 on socket 1 00:04:36.096 EAL: Detected lcore 34 as core 6 on socket 1 00:04:36.096 EAL: Detected lcore 35 as core 8 on socket 1 00:04:36.096 EAL: Detected lcore 36 as core 9 on socket 1 00:04:36.096 EAL: Detected lcore 37 as core 10 on socket 1 00:04:36.096 EAL: Detected lcore 38 as core 11 on socket 1 00:04:36.096 EAL: Detected lcore 39 as core 12 on socket 1 00:04:36.096 EAL: Detected lcore 40 as core 13 on socket 1 00:04:36.096 EAL: Detected lcore 41 as core 14 on socket 1 00:04:36.096 EAL: Detected lcore 42 as core 16 on socket 1 00:04:36.096 EAL: Detected lcore 43 as core 17 on socket 1 00:04:36.096 EAL: Detected lcore 44 as core 18 on socket 1 00:04:36.096 EAL: Detected lcore 45 as core 19 on socket 1 00:04:36.096 EAL: Detected lcore 46 as core 20 on socket 1 00:04:36.096 EAL: Detected lcore 47 as core 21 on socket 1 00:04:36.096 EAL: Detected lcore 48 as core 22 on socket 1 00:04:36.096 EAL: Detected lcore 49 as core 24 on socket 1 00:04:36.096 EAL: Detected lcore 50 as core 25 on socket 1 00:04:36.096 EAL: Detected lcore 51 as core 26 on socket 1 00:04:36.096 EAL: Detected lcore 52 as core 27 on socket 1 00:04:36.096 EAL: Detected lcore 53 as core 28 on socket 1 00:04:36.096 EAL: Detected lcore 54 as core 29 on socket 1 00:04:36.096 EAL: Detected lcore 55 as core 30 on socket 1 00:04:36.096 EAL: Detected lcore 56 as core 0 on socket 0 00:04:36.096 EAL: Detected lcore 57 as core 1 on socket 0 00:04:36.096 EAL: Detected lcore 58 as core 2 on socket 0 00:04:36.096 EAL: Detected lcore 59 as core 3 on socket 0 00:04:36.096 EAL: Detected lcore 60 as core 4 on socket 0 00:04:36.096 EAL: Detected lcore 61 as core 5 on socket 0 00:04:36.096 EAL: Detected lcore 62 as core 6 on socket 0 00:04:36.096 EAL: Detected lcore 63 as core 8 on socket 0 00:04:36.096 EAL: 
Detected lcore 64 as core 9 on socket 0 00:04:36.096 EAL: Detected lcore 65 as core 10 on socket 0 00:04:36.096 EAL: Detected lcore 66 as core 11 on socket 0 00:04:36.096 EAL: Detected lcore 67 as core 12 on socket 0 00:04:36.096 EAL: Detected lcore 68 as core 13 on socket 0 00:04:36.096 EAL: Detected lcore 69 as core 14 on socket 0 00:04:36.096 EAL: Detected lcore 70 as core 16 on socket 0 00:04:36.096 EAL: Detected lcore 71 as core 17 on socket 0 00:04:36.096 EAL: Detected lcore 72 as core 18 on socket 0 00:04:36.096 EAL: Detected lcore 73 as core 19 on socket 0 00:04:36.096 EAL: Detected lcore 74 as core 20 on socket 0 00:04:36.096 EAL: Detected lcore 75 as core 21 on socket 0 00:04:36.096 EAL: Detected lcore 76 as core 22 on socket 0 00:04:36.096 EAL: Detected lcore 77 as core 24 on socket 0 00:04:36.096 EAL: Detected lcore 78 as core 25 on socket 0 00:04:36.096 EAL: Detected lcore 79 as core 26 on socket 0 00:04:36.096 EAL: Detected lcore 80 as core 27 on socket 0 00:04:36.096 EAL: Detected lcore 81 as core 28 on socket 0 00:04:36.096 EAL: Detected lcore 82 as core 29 on socket 0 00:04:36.096 EAL: Detected lcore 83 as core 30 on socket 0 00:04:36.354 EAL: Detected lcore 84 as core 0 on socket 1 00:04:36.354 EAL: Detected lcore 85 as core 1 on socket 1 00:04:36.354 EAL: Detected lcore 86 as core 2 on socket 1 00:04:36.354 EAL: Detected lcore 87 as core 3 on socket 1 00:04:36.354 EAL: Detected lcore 88 as core 4 on socket 1 00:04:36.354 EAL: Detected lcore 89 as core 5 on socket 1 00:04:36.354 EAL: Detected lcore 90 as core 6 on socket 1 00:04:36.354 EAL: Detected lcore 91 as core 8 on socket 1 00:04:36.354 EAL: Detected lcore 92 as core 9 on socket 1 00:04:36.354 EAL: Detected lcore 93 as core 10 on socket 1 00:04:36.354 EAL: Detected lcore 94 as core 11 on socket 1 00:04:36.354 EAL: Detected lcore 95 as core 12 on socket 1 00:04:36.354 EAL: Detected lcore 96 as core 13 on socket 1 00:04:36.354 EAL: Detected lcore 97 as core 14 on socket 1 00:04:36.354 EAL: Detected lcore 98 as core 16 on socket 1 00:04:36.354 EAL: Detected lcore 99 as core 17 on socket 1 00:04:36.354 EAL: Detected lcore 100 as core 18 on socket 1 00:04:36.354 EAL: Detected lcore 101 as core 19 on socket 1 00:04:36.354 EAL: Detected lcore 102 as core 20 on socket 1 00:04:36.354 EAL: Detected lcore 103 as core 21 on socket 1 00:04:36.354 EAL: Detected lcore 104 as core 22 on socket 1 00:04:36.354 EAL: Detected lcore 105 as core 24 on socket 1 00:04:36.354 EAL: Detected lcore 106 as core 25 on socket 1 00:04:36.354 EAL: Detected lcore 107 as core 26 on socket 1 00:04:36.354 EAL: Detected lcore 108 as core 27 on socket 1 00:04:36.354 EAL: Detected lcore 109 as core 28 on socket 1 00:04:36.354 EAL: Detected lcore 110 as core 29 on socket 1 00:04:36.354 EAL: Detected lcore 111 as core 30 on socket 1 00:04:36.354 EAL: Maximum logical cores by configuration: 128 00:04:36.354 EAL: Detected CPU lcores: 112 00:04:36.354 EAL: Detected NUMA nodes: 2 00:04:36.354 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.354 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:36.354 EAL: Checking presence of .so 'librte_eal.so' 00:04:36.354 EAL: Detected static linkage of DPDK 00:04:36.354 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.354 EAL: Bus pci wants IOVA as 'DC' 00:04:36.354 EAL: Buses did not request a specific IOVA mode. 00:04:36.354 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:36.354 EAL: Selected IOVA mode 'VA' 00:04:36.354 EAL: Probing VFIO support... 
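The lcore inventory above (112 lcores across 2 sockets, with the second set of 56 lcores repeating the core/socket pairs of the first, i.e. hyperthread siblings) is EAL reading CPU topology. The same "lcore N as core C on socket S" mapping can be reproduced from standard sysfs files, without DPDK:

    # Sketch: print "lcore N as core C on socket S" from sysfs topology.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        n=${cpu##*cpu}
        printf 'lcore %s as core %s on socket %s\n' "$n" \
            "$(cat "$cpu/topology/core_id")" \
            "$(cat "$cpu/topology/physical_package_id")"
    done | sort -k2,2n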
00:04:36.354 EAL: IOMMU type 1 (Type 1) is supported 00:04:36.354 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:36.354 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:36.354 EAL: VFIO support initialized 00:04:36.354 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.354 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.354 EAL: Setting up physically contiguous memory... 00:04:36.354 EAL: Setting maximum number of open files to 524288 00:04:36.354 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.354 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:36.354 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.354 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.354 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.354 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.354 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.354 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.354 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.354 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.355 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.355 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.355 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.355 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.355 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.355 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:36.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.355 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:36.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.355 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:36.355 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:36.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.355 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:36.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.355 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:36.355 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:36.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.355 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:36.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.355 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:36.355 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:36.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.355 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:36.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.355 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:36.355 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:36.355 EAL: Hugepages will be freed exactly as allocated. 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: TSC frequency is ~2500000 KHz 00:04:36.355 EAL: Main lcore 0 is ready (tid=7f8ed3873a00;cpuset=[0]) 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 0 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.355 00:04:36.355 00:04:36.355 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.355 http://cunit.sourceforge.net/ 00:04:36.355 00:04:36.355 00:04:36.355 Suite: components_suite 00:04:36.355 Test: vtophys_malloc_test ...passed 00:04:36.355 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.355 EAL: Trying to obtain current memory policy. 
00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.355 EAL: Trying to obtain current memory policy. 00:04:36.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.355 EAL: Restoring previous memory policy: 4 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.355 EAL: request: mp_malloc_sync 00:04:36.355 EAL: No shared files mode enabled, IPC is disabled 00:04:36.355 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.643 EAL: request: mp_malloc_sync 00:04:36.643 EAL: No shared files mode enabled, IPC is disabled 00:04:36.643 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.643 EAL: Trying to obtain current memory policy. 
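The expand/shrink ladder running through vtophys_spdk_malloc_test is regular: 4, 6, 10, 18, 34, 66, 130 and 258 MB so far, with 514 MB and 1026 MB steps following below. Each request is a power of two plus the 2 MB the main lcore already holds,

    size_k = 2^k + 2  (MB),  k = 1, 2, ..., 10

which is an inference from the printed sizes, not from the test source.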
00:04:36.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.643 EAL: Restoring previous memory policy: 4 00:04:36.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.643 EAL: request: mp_malloc_sync 00:04:36.643 EAL: No shared files mode enabled, IPC is disabled 00:04:36.643 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.643 EAL: request: mp_malloc_sync 00:04:36.643 EAL: No shared files mode enabled, IPC is disabled 00:04:36.643 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.643 EAL: Trying to obtain current memory policy. 00:04:36.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.901 EAL: Restoring previous memory policy: 4 00:04:36.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.901 EAL: request: mp_malloc_sync 00:04:36.901 EAL: No shared files mode enabled, IPC is disabled 00:04:36.901 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.159 EAL: request: mp_malloc_sync 00:04:37.159 EAL: No shared files mode enabled, IPC is disabled 00:04:37.159 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.159 passed 00:04:37.159 00:04:37.159 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.159 suites 1 1 n/a 0 0 00:04:37.159 tests 2 2 2 0 0 00:04:37.159 asserts 497 497 497 0 n/a 00:04:37.159 00:04:37.159 Elapsed time = 0.959 seconds 00:04:37.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.159 EAL: request: mp_malloc_sync 00:04:37.159 EAL: No shared files mode enabled, IPC is disabled 00:04:37.159 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.159 EAL: No shared files mode enabled, IPC is disabled 00:04:37.159 EAL: No shared files mode enabled, IPC is disabled 00:04:37.159 EAL: No shared files mode enabled, IPC is disabled 00:04:37.159 00:04:37.159 real 0m1.075s 00:04:37.159 user 0m0.626s 00:04:37.159 sys 0m0.428s 00:04:37.159 17:53:58 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.159 17:53:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.159 ************************************ 00:04:37.159 END TEST env_vtophys 00:04:37.159 ************************************ 00:04:37.417 17:53:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.417 17:53:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.417 17:53:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.417 17:53:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.417 ************************************ 00:04:37.417 START TEST env_pci 00:04:37.417 ************************************ 00:04:37.417 17:53:58 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.417 00:04:37.417 00:04:37.417 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.417 http://cunit.sourceforge.net/ 00:04:37.417 00:04:37.417 00:04:37.417 Suite: pci 00:04:37.417 Test: pci_hook ...[2024-10-05 17:53:58.709584] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1461494 has claimed it 00:04:37.417 EAL: Cannot find device (10000:00:01.0) 00:04:37.417 EAL: Failed to attach device on primary process 00:04:37.417 passed 00:04:37.417 00:04:37.417 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:37.417 suites 1 1 n/a 0 0 00:04:37.417 tests 1 1 1 0 0 00:04:37.417 asserts 25 25 25 0 n/a 00:04:37.417 00:04:37.417 Elapsed time = 0.035 seconds 00:04:37.417 00:04:37.417 real 0m0.056s 00:04:37.417 user 0m0.014s 00:04:37.417 sys 0m0.042s 00:04:37.417 17:53:58 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.417 17:53:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.417 ************************************ 00:04:37.417 END TEST env_pci 00:04:37.417 ************************************ 00:04:37.417 17:53:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.417 17:53:58 env -- env/env.sh@15 -- # uname 00:04:37.417 17:53:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.417 17:53:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.417 17:53:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.417 17:53:58 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:37.417 17:53:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.417 17:53:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.417 ************************************ 00:04:37.417 START TEST env_dpdk_post_init 00:04:37.417 ************************************ 00:04:37.417 17:53:58 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.417 EAL: Detected CPU lcores: 112 00:04:37.417 EAL: Detected NUMA nodes: 2 00:04:37.417 EAL: Detected static linkage of DPDK 00:04:37.417 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.676 EAL: Selected IOVA mode 'VA' 00:04:37.676 EAL: VFIO support initialized 00:04:37.676 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.676 EAL: Using IOMMU type 1 (Type 1) 00:04:38.243 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:42.593 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:42.593 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:42.593 Starting DPDK initialization... 00:04:42.593 Starting SPDK post initialization... 00:04:42.593 SPDK NVMe probe 00:04:42.593 Attaching to 0000:d8:00.0 00:04:42.593 Attached to 0000:d8:00.0 00:04:42.593 Cleaning up... 
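env_dpdk_post_init above probes the NVMe at 0000:d8:00.0 through vfio-pci, attaches it, and detaches again. The probe can be reproduced outside the harness once setup.sh has bound the devices; the binary path and flags below are copied from the run_test line earlier in this log:

    # Sketch: bind devices, then run the post-init probe standalone.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    sudo "$rootdir/scripts/setup.sh" config
    sudo "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000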
00:04:42.593 00:04:42.593 real 0m4.692s 00:04:42.593 user 0m3.235s 00:04:42.593 sys 0m0.697s 00:04:42.593 17:54:03 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.593 17:54:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.593 ************************************ 00:04:42.593 END TEST env_dpdk_post_init 00:04:42.593 ************************************ 00:04:42.593 17:54:03 env -- env/env.sh@26 -- # uname 00:04:42.593 17:54:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.593 17:54:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.593 17:54:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.593 17:54:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.593 17:54:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.593 ************************************ 00:04:42.593 START TEST env_mem_callbacks 00:04:42.593 ************************************ 00:04:42.593 17:54:03 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.593 EAL: Detected CPU lcores: 112 00:04:42.593 EAL: Detected NUMA nodes: 2 00:04:42.593 EAL: Detected static linkage of DPDK 00:04:42.593 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.593 EAL: Selected IOVA mode 'VA' 00:04:42.593 EAL: VFIO support initialized 00:04:42.593 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.593 00:04:42.593 00:04:42.593 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.593 http://cunit.sourceforge.net/ 00:04:42.593 00:04:42.593 00:04:42.593 Suite: memory 00:04:42.593 Test: test ... 
00:04:42.593 register 0x200000200000 2097152 00:04:42.593 malloc 3145728 00:04:42.593 register 0x200000400000 4194304 00:04:42.593 buf 0x200000500000 len 3145728 PASSED 00:04:42.593 malloc 64 00:04:42.593 buf 0x2000004fff40 len 64 PASSED 00:04:42.593 malloc 4194304 00:04:42.593 register 0x200000800000 6291456 00:04:42.593 buf 0x200000a00000 len 4194304 PASSED 00:04:42.593 free 0x200000500000 3145728 00:04:42.593 free 0x2000004fff40 64 00:04:42.593 unregister 0x200000400000 4194304 PASSED 00:04:42.593 free 0x200000a00000 4194304 00:04:42.593 unregister 0x200000800000 6291456 PASSED 00:04:42.593 malloc 8388608 00:04:42.593 register 0x200000400000 10485760 00:04:42.593 buf 0x200000600000 len 8388608 PASSED 00:04:42.593 free 0x200000600000 8388608 00:04:42.593 unregister 0x200000400000 10485760 PASSED 00:04:42.593 passed 00:04:42.593 00:04:42.593 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.593 suites 1 1 n/a 0 0 00:04:42.593 tests 1 1 1 0 0 00:04:42.593 asserts 15 15 15 0 n/a 00:04:42.593 00:04:42.593 Elapsed time = 0.005 seconds 00:04:42.593 00:04:42.593 real 0m0.065s 00:04:42.593 user 0m0.023s 00:04:42.593 sys 0m0.041s 00:04:42.593 17:54:03 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.593 17:54:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.593 ************************************ 00:04:42.593 END TEST env_mem_callbacks 00:04:42.593 ************************************ 00:04:42.593 00:04:42.593 real 0m6.593s 00:04:42.593 user 0m4.229s 00:04:42.593 sys 0m1.630s 00:04:42.593 17:54:03 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.593 17:54:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.593 ************************************ 00:04:42.593 END TEST env 00:04:42.593 ************************************ 00:04:42.593 17:54:03 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.593 17:54:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.593 17:54:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.593 17:54:03 -- common/autotest_common.sh@10 -- # set +x 00:04:42.593 ************************************ 00:04:42.593 START TEST rpc 00:04:42.593 ************************************ 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.593 * Looking for test storage... 
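Every suite in this log is driven through run_test, which brackets the command with the asterisk START/END banners and reports the real/user/sys totals seen above (0m6.593s for the whole env suite). A minimal wrapper reproducing that reporting, as a sketch of the observed output rather than the harness's exact implementation:

    # Sketch: banner and time a named test like the log output above.
    run_test_sketch() {
        local name=$1 stars='************************************'
        shift
        echo "$stars"; echo "START TEST $name"; echo "$stars"
        time "$@"
        echo "$stars"; echo "END TEST $name"; echo "$stars"
    }
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    run_test_sketch env_memory "$rootdir/test/env/memory/memory_ut"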
00:04:42.593 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.593 17:54:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.593 17:54:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.593 17:54:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.593 17:54:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.593 17:54:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.593 17:54:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.593 17:54:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.593 17:54:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.593 17:54:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.593 17:54:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.593 17:54:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.593 17:54:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.593 17:54:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.593 17:54:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.593 17:54:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:42.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.593 --rc genhtml_branch_coverage=1 00:04:42.593 --rc genhtml_function_coverage=1 00:04:42.593 --rc genhtml_legend=1 00:04:42.593 --rc geninfo_all_blocks=1 00:04:42.593 --rc geninfo_unexecuted_blocks=1 00:04:42.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.593 ' 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:42.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.593 --rc genhtml_branch_coverage=1 00:04:42.593 --rc genhtml_function_coverage=1 00:04:42.593 --rc genhtml_legend=1 00:04:42.593 --rc geninfo_all_blocks=1 00:04:42.593 --rc geninfo_unexecuted_blocks=1 00:04:42.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.593 ' 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:04:42.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.593 --rc genhtml_branch_coverage=1 00:04:42.593 --rc genhtml_function_coverage=1 00:04:42.593 --rc genhtml_legend=1 00:04:42.593 --rc geninfo_all_blocks=1 00:04:42.593 --rc geninfo_unexecuted_blocks=1 00:04:42.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.593 ' 00:04:42.593 17:54:03 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:42.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.594 --rc genhtml_branch_coverage=1 00:04:42.594 --rc genhtml_function_coverage=1 00:04:42.594 --rc genhtml_legend=1 00:04:42.594 --rc geninfo_all_blocks=1 00:04:42.594 --rc geninfo_unexecuted_blocks=1 00:04:42.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.594 ' 00:04:42.594 17:54:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1462894 00:04:42.594 17:54:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.594 17:54:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.594 17:54:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1462894 00:04:42.594 17:54:03 rpc -- common/autotest_common.sh@831 -- # '[' -z 1462894 ']' 00:04:42.594 17:54:03 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.594 17:54:03 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.594 17:54:03 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.594 17:54:03 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.594 17:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.594 [2024-10-05 17:54:04.022456] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:42.594 [2024-10-05 17:54:04.022525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462894 ] 00:04:42.852 [2024-10-05 17:54:04.089759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.852 [2024-10-05 17:54:04.167709] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.852 [2024-10-05 17:54:04.167749] app.c: 614:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1462894' to capture a snapshot of events at runtime. 00:04:42.852 [2024-10-05 17:54:04.167758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.852 [2024-10-05 17:54:04.167767] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.852 [2024-10-05 17:54:04.167773] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1462894 for offline analysis/debug. 
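app_setup_trace above confirms the bdev tracepoint group is enabled and names the shared-memory trace buffer. Following the two notices verbatim, a snapshot can be taken while the target runs, or the raw buffer copied for offline decoding (the spdk_trace binary location below is the usual build output, an assumption here):

    # Sketch: snapshot tracepoints from the running spdk_tgt (pid from the log).
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$rootdir/build/bin/spdk_trace" -s spdk_tgt -p 1462894
    # Or, as the second notice suggests, keep the buffer for offline analysis:
    cp /dev/shm/spdk_tgt_trace.pid1462894 /tmp/spdk_tgt_trace.pid1462894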
00:04:42.853 [2024-10-05 17:54:04.168391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.420 17:54:04 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.420 17:54:04 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:43.420 17:54:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.420 17:54:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:43.420 17:54:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.420 17:54:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.420 17:54:04 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.420 17:54:04 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.420 17:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.680 ************************************ 00:04:43.680 START TEST rpc_integrity 00:04:43.680 ************************************ 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.680 17:54:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.680 { 00:04:43.680 "name": "Malloc0", 00:04:43.680 "aliases": [ 00:04:43.680 "bc8c709d-3aff-4e36-b219-eb87e03ab8f3" 00:04:43.680 ], 00:04:43.680 "product_name": "Malloc disk", 00:04:43.680 "block_size": 512, 00:04:43.680 "num_blocks": 16384, 00:04:43.680 "uuid": "bc8c709d-3aff-4e36-b219-eb87e03ab8f3", 00:04:43.680 "assigned_rate_limits": { 00:04:43.680 "rw_ios_per_sec": 0, 00:04:43.680 "rw_mbytes_per_sec": 0, 00:04:43.680 "r_mbytes_per_sec": 0, 00:04:43.680 "w_mbytes_per_sec": 
0 00:04:43.680 }, 00:04:43.680 "claimed": false, 00:04:43.680 "zoned": false, 00:04:43.680 "supported_io_types": { 00:04:43.680 "read": true, 00:04:43.680 "write": true, 00:04:43.680 "unmap": true, 00:04:43.680 "flush": true, 00:04:43.680 "reset": true, 00:04:43.680 "nvme_admin": false, 00:04:43.680 "nvme_io": false, 00:04:43.680 "nvme_io_md": false, 00:04:43.680 "write_zeroes": true, 00:04:43.680 "zcopy": true, 00:04:43.680 "get_zone_info": false, 00:04:43.680 "zone_management": false, 00:04:43.680 "zone_append": false, 00:04:43.680 "compare": false, 00:04:43.680 "compare_and_write": false, 00:04:43.680 "abort": true, 00:04:43.680 "seek_hole": false, 00:04:43.680 "seek_data": false, 00:04:43.680 "copy": true, 00:04:43.680 "nvme_iov_md": false 00:04:43.680 }, 00:04:43.680 "memory_domains": [ 00:04:43.680 { 00:04:43.680 "dma_device_id": "system", 00:04:43.680 "dma_device_type": 1 00:04:43.680 }, 00:04:43.680 { 00:04:43.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.680 "dma_device_type": 2 00:04:43.680 } 00:04:43.680 ], 00:04:43.680 "driver_specific": {} 00:04:43.680 } 00:04:43.680 ]' 00:04:43.680 17:54:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.680 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.680 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.680 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.680 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.680 [2024-10-05 17:54:05.030049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.680 [2024-10-05 17:54:05.030083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.680 [2024-10-05 17:54:05.030100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e8e7b0 00:04:43.680 [2024-10-05 17:54:05.030108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.680 [2024-10-05 17:54:05.031002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.680 [2024-10-05 17:54:05.031026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.680 Passthru0 00:04:43.680 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.680 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.680 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.680 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.680 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.680 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.680 { 00:04:43.680 "name": "Malloc0", 00:04:43.680 "aliases": [ 00:04:43.680 "bc8c709d-3aff-4e36-b219-eb87e03ab8f3" 00:04:43.680 ], 00:04:43.680 "product_name": "Malloc disk", 00:04:43.680 "block_size": 512, 00:04:43.680 "num_blocks": 16384, 00:04:43.680 "uuid": "bc8c709d-3aff-4e36-b219-eb87e03ab8f3", 00:04:43.680 "assigned_rate_limits": { 00:04:43.680 "rw_ios_per_sec": 0, 00:04:43.680 "rw_mbytes_per_sec": 0, 00:04:43.680 "r_mbytes_per_sec": 0, 00:04:43.680 "w_mbytes_per_sec": 0 00:04:43.680 }, 00:04:43.680 "claimed": true, 00:04:43.680 "claim_type": "exclusive_write", 00:04:43.680 "zoned": false, 00:04:43.680 "supported_io_types": { 00:04:43.680 "read": true, 00:04:43.680 "write": true, 00:04:43.680 "unmap": true, 
00:04:43.680 "flush": true, 00:04:43.680 "reset": true, 00:04:43.680 "nvme_admin": false, 00:04:43.680 "nvme_io": false, 00:04:43.680 "nvme_io_md": false, 00:04:43.680 "write_zeroes": true, 00:04:43.680 "zcopy": true, 00:04:43.680 "get_zone_info": false, 00:04:43.680 "zone_management": false, 00:04:43.680 "zone_append": false, 00:04:43.680 "compare": false, 00:04:43.681 "compare_and_write": false, 00:04:43.681 "abort": true, 00:04:43.681 "seek_hole": false, 00:04:43.681 "seek_data": false, 00:04:43.681 "copy": true, 00:04:43.681 "nvme_iov_md": false 00:04:43.681 }, 00:04:43.681 "memory_domains": [ 00:04:43.681 { 00:04:43.681 "dma_device_id": "system", 00:04:43.681 "dma_device_type": 1 00:04:43.681 }, 00:04:43.681 { 00:04:43.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.681 "dma_device_type": 2 00:04:43.681 } 00:04:43.681 ], 00:04:43.681 "driver_specific": {} 00:04:43.681 }, 00:04:43.681 { 00:04:43.681 "name": "Passthru0", 00:04:43.681 "aliases": [ 00:04:43.681 "35a45afd-3ba9-5daf-96bb-aedb78b7c1e4" 00:04:43.681 ], 00:04:43.681 "product_name": "passthru", 00:04:43.681 "block_size": 512, 00:04:43.681 "num_blocks": 16384, 00:04:43.681 "uuid": "35a45afd-3ba9-5daf-96bb-aedb78b7c1e4", 00:04:43.681 "assigned_rate_limits": { 00:04:43.681 "rw_ios_per_sec": 0, 00:04:43.681 "rw_mbytes_per_sec": 0, 00:04:43.681 "r_mbytes_per_sec": 0, 00:04:43.681 "w_mbytes_per_sec": 0 00:04:43.681 }, 00:04:43.681 "claimed": false, 00:04:43.681 "zoned": false, 00:04:43.681 "supported_io_types": { 00:04:43.681 "read": true, 00:04:43.681 "write": true, 00:04:43.681 "unmap": true, 00:04:43.681 "flush": true, 00:04:43.681 "reset": true, 00:04:43.681 "nvme_admin": false, 00:04:43.681 "nvme_io": false, 00:04:43.681 "nvme_io_md": false, 00:04:43.681 "write_zeroes": true, 00:04:43.681 "zcopy": true, 00:04:43.681 "get_zone_info": false, 00:04:43.681 "zone_management": false, 00:04:43.681 "zone_append": false, 00:04:43.681 "compare": false, 00:04:43.681 "compare_and_write": false, 00:04:43.681 "abort": true, 00:04:43.681 "seek_hole": false, 00:04:43.681 "seek_data": false, 00:04:43.681 "copy": true, 00:04:43.681 "nvme_iov_md": false 00:04:43.681 }, 00:04:43.681 "memory_domains": [ 00:04:43.681 { 00:04:43.681 "dma_device_id": "system", 00:04:43.681 "dma_device_type": 1 00:04:43.681 }, 00:04:43.681 { 00:04:43.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.681 "dma_device_type": 2 00:04:43.681 } 00:04:43.681 ], 00:04:43.681 "driver_specific": { 00:04:43.681 "passthru": { 00:04:43.681 "name": "Passthru0", 00:04:43.681 "base_bdev_name": "Malloc0" 00:04:43.681 } 00:04:43.681 } 00:04:43.681 } 00:04:43.681 ]' 00:04:43.681 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.681 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.681 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.681 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.681 17:54:05 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.681 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.681 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.681 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.940 17:54:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.940 00:04:43.940 real 0m0.271s 00:04:43.940 user 0m0.173s 00:04:43.940 sys 0m0.045s 00:04:43.940 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.940 17:54:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.940 ************************************ 00:04:43.940 END TEST rpc_integrity 00:04:43.940 ************************************ 00:04:43.940 17:54:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.940 17:54:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.940 17:54:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.940 17:54:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.940 ************************************ 00:04:43.940 START TEST rpc_plugins 00:04:43.940 ************************************ 00:04:43.940 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:43.940 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.940 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.940 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.940 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.940 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.940 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.941 { 00:04:43.941 "name": "Malloc1", 00:04:43.941 "aliases": [ 00:04:43.941 "68fbf5e8-3cda-475a-a06c-6d6852b243d5" 00:04:43.941 ], 00:04:43.941 "product_name": "Malloc disk", 00:04:43.941 "block_size": 4096, 00:04:43.941 "num_blocks": 256, 00:04:43.941 "uuid": "68fbf5e8-3cda-475a-a06c-6d6852b243d5", 00:04:43.941 "assigned_rate_limits": { 00:04:43.941 "rw_ios_per_sec": 0, 00:04:43.941 "rw_mbytes_per_sec": 0, 00:04:43.941 "r_mbytes_per_sec": 0, 00:04:43.941 "w_mbytes_per_sec": 0 00:04:43.941 }, 00:04:43.941 "claimed": false, 00:04:43.941 "zoned": false, 00:04:43.941 "supported_io_types": { 00:04:43.941 "read": true, 00:04:43.941 "write": true, 00:04:43.941 "unmap": true, 00:04:43.941 "flush": true, 00:04:43.941 "reset": true, 00:04:43.941 "nvme_admin": false, 00:04:43.941 "nvme_io": false, 00:04:43.941 "nvme_io_md": false, 00:04:43.941 "write_zeroes": true, 00:04:43.941 "zcopy": true, 00:04:43.941 "get_zone_info": false, 00:04:43.941 "zone_management": false, 00:04:43.941 "zone_append": false, 00:04:43.941 "compare": false, 00:04:43.941 "compare_and_write": false, 00:04:43.941 "abort": true, 00:04:43.941 "seek_hole": false, 00:04:43.941 "seek_data": false, 00:04:43.941 "copy": true, 00:04:43.941 
"nvme_iov_md": false 00:04:43.941 }, 00:04:43.941 "memory_domains": [ 00:04:43.941 { 00:04:43.941 "dma_device_id": "system", 00:04:43.941 "dma_device_type": 1 00:04:43.941 }, 00:04:43.941 { 00:04:43.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.941 "dma_device_type": 2 00:04:43.941 } 00:04:43.941 ], 00:04:43.941 "driver_specific": {} 00:04:43.941 } 00:04:43.941 ]' 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.941 17:54:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.941 00:04:43.941 real 0m0.136s 00:04:43.941 user 0m0.084s 00:04:43.941 sys 0m0.023s 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.941 17:54:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.941 ************************************ 00:04:43.941 END TEST rpc_plugins 00:04:43.941 ************************************ 00:04:44.200 17:54:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:44.200 17:54:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.200 17:54:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.200 17:54:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.200 ************************************ 00:04:44.200 START TEST rpc_trace_cmd_test 00:04:44.200 ************************************ 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.200 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1462894", 00:04:44.200 "tpoint_group_mask": "0x8", 00:04:44.200 "iscsi_conn": { 00:04:44.200 "mask": "0x2", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "scsi": { 00:04:44.200 "mask": "0x4", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "bdev": { 00:04:44.200 "mask": "0x8", 00:04:44.200 "tpoint_mask": "0xffffffffffffffff" 00:04:44.200 }, 00:04:44.200 "nvmf_rdma": { 00:04:44.200 "mask": "0x10", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "nvmf_tcp": { 00:04:44.200 "mask": "0x20", 
00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "ftl": { 00:04:44.200 "mask": "0x40", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "blobfs": { 00:04:44.200 "mask": "0x80", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "dsa": { 00:04:44.200 "mask": "0x200", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "thread": { 00:04:44.200 "mask": "0x400", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "nvme_pcie": { 00:04:44.200 "mask": "0x800", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "iaa": { 00:04:44.200 "mask": "0x1000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "nvme_tcp": { 00:04:44.200 "mask": "0x2000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "bdev_nvme": { 00:04:44.200 "mask": "0x4000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "sock": { 00:04:44.200 "mask": "0x8000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "blob": { 00:04:44.200 "mask": "0x10000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "bdev_raid": { 00:04:44.200 "mask": "0x20000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 }, 00:04:44.200 "scheduler": { 00:04:44.200 "mask": "0x40000", 00:04:44.200 "tpoint_mask": "0x0" 00:04:44.200 } 00:04:44.200 }' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.200 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.460 17:54:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.460 00:04:44.460 real 0m0.236s 00:04:44.460 user 0m0.188s 00:04:44.460 sys 0m0.040s 00:04:44.460 17:54:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.460 17:54:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.460 ************************************ 00:04:44.460 END TEST rpc_trace_cmd_test 00:04:44.460 ************************************ 00:04:44.460 17:54:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.460 17:54:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.460 17:54:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.460 17:54:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.460 17:54:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.460 17:54:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.460 ************************************ 00:04:44.460 START TEST rpc_daemon_integrity 00:04:44.460 ************************************ 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.460 17:54:05 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.460 { 00:04:44.460 "name": "Malloc2", 00:04:44.460 "aliases": [ 00:04:44.460 "d865ab66-136c-4c03-956c-b5d368da7dd8" 00:04:44.460 ], 00:04:44.460 "product_name": "Malloc disk", 00:04:44.460 "block_size": 512, 00:04:44.460 "num_blocks": 16384, 00:04:44.460 "uuid": "d865ab66-136c-4c03-956c-b5d368da7dd8", 00:04:44.460 "assigned_rate_limits": { 00:04:44.460 "rw_ios_per_sec": 0, 00:04:44.460 "rw_mbytes_per_sec": 0, 00:04:44.460 "r_mbytes_per_sec": 0, 00:04:44.460 "w_mbytes_per_sec": 0 00:04:44.460 }, 00:04:44.460 "claimed": false, 00:04:44.460 "zoned": false, 00:04:44.460 "supported_io_types": { 00:04:44.460 "read": true, 00:04:44.460 "write": true, 00:04:44.460 "unmap": true, 00:04:44.460 "flush": true, 00:04:44.460 "reset": true, 00:04:44.460 "nvme_admin": false, 00:04:44.460 "nvme_io": false, 00:04:44.460 "nvme_io_md": false, 00:04:44.460 "write_zeroes": true, 00:04:44.460 "zcopy": true, 00:04:44.460 "get_zone_info": false, 00:04:44.460 "zone_management": false, 00:04:44.460 "zone_append": false, 00:04:44.460 "compare": false, 00:04:44.460 "compare_and_write": false, 00:04:44.460 "abort": true, 00:04:44.460 "seek_hole": false, 00:04:44.460 "seek_data": false, 00:04:44.460 "copy": true, 00:04:44.460 "nvme_iov_md": false 00:04:44.460 }, 00:04:44.460 "memory_domains": [ 00:04:44.460 { 00:04:44.460 "dma_device_id": "system", 00:04:44.460 "dma_device_type": 1 00:04:44.460 }, 00:04:44.460 { 00:04:44.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.460 "dma_device_type": 2 00:04:44.460 } 00:04:44.460 ], 00:04:44.460 "driver_specific": {} 00:04:44.460 } 00:04:44.460 ]' 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.460 [2024-10-05 17:54:05.908347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.460 
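Both integrity cases (rpc_integrity above, rpc_daemon_integrity registering here) exercise the same lifecycle against the bdev layer: create an 8 MB malloc disk with 512-byte blocks (hence num_blocks 16384 in the dumps), stack a passthru bdev on it, confirm bdev_get_bdevs reports both, then tear down in reverse and confirm the list is empty. Reproduced by hand it would look roughly like this (a sketch assuming the stock scripts/rpc.py against the default socket):

  scripts/rpc.py bdev_malloc_create 8 512                  # prints the name, e.g. Malloc2
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                # expect 2
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2
  scripts/rpc.py bdev_get_bdevs | jq length                # expect 0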
[2024-10-05 17:54:05.908380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.460 [2024-10-05 17:54:05.908396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4faf790 00:04:44.460 [2024-10-05 17:54:05.908406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.460 [2024-10-05 17:54:05.909329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.460 [2024-10-05 17:54:05.909353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.460 Passthru0 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.460 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.720 { 00:04:44.720 "name": "Malloc2", 00:04:44.720 "aliases": [ 00:04:44.720 "d865ab66-136c-4c03-956c-b5d368da7dd8" 00:04:44.720 ], 00:04:44.720 "product_name": "Malloc disk", 00:04:44.720 "block_size": 512, 00:04:44.720 "num_blocks": 16384, 00:04:44.720 "uuid": "d865ab66-136c-4c03-956c-b5d368da7dd8", 00:04:44.720 "assigned_rate_limits": { 00:04:44.720 "rw_ios_per_sec": 0, 00:04:44.720 "rw_mbytes_per_sec": 0, 00:04:44.720 "r_mbytes_per_sec": 0, 00:04:44.720 "w_mbytes_per_sec": 0 00:04:44.720 }, 00:04:44.720 "claimed": true, 00:04:44.720 "claim_type": "exclusive_write", 00:04:44.720 "zoned": false, 00:04:44.720 "supported_io_types": { 00:04:44.720 "read": true, 00:04:44.720 "write": true, 00:04:44.720 "unmap": true, 00:04:44.720 "flush": true, 00:04:44.720 "reset": true, 00:04:44.720 "nvme_admin": false, 00:04:44.720 "nvme_io": false, 00:04:44.720 "nvme_io_md": false, 00:04:44.720 "write_zeroes": true, 00:04:44.720 "zcopy": true, 00:04:44.720 "get_zone_info": false, 00:04:44.720 "zone_management": false, 00:04:44.720 "zone_append": false, 00:04:44.720 "compare": false, 00:04:44.720 "compare_and_write": false, 00:04:44.720 "abort": true, 00:04:44.720 "seek_hole": false, 00:04:44.720 "seek_data": false, 00:04:44.720 "copy": true, 00:04:44.720 "nvme_iov_md": false 00:04:44.720 }, 00:04:44.720 "memory_domains": [ 00:04:44.720 { 00:04:44.720 "dma_device_id": "system", 00:04:44.720 "dma_device_type": 1 00:04:44.720 }, 00:04:44.720 { 00:04:44.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.720 "dma_device_type": 2 00:04:44.720 } 00:04:44.720 ], 00:04:44.720 "driver_specific": {} 00:04:44.720 }, 00:04:44.720 { 00:04:44.720 "name": "Passthru0", 00:04:44.720 "aliases": [ 00:04:44.720 "4cc2c542-fc05-5cd0-b72d-ae35ba0f5285" 00:04:44.720 ], 00:04:44.720 "product_name": "passthru", 00:04:44.720 "block_size": 512, 00:04:44.720 "num_blocks": 16384, 00:04:44.720 "uuid": "4cc2c542-fc05-5cd0-b72d-ae35ba0f5285", 00:04:44.720 "assigned_rate_limits": { 00:04:44.720 "rw_ios_per_sec": 0, 00:04:44.720 "rw_mbytes_per_sec": 0, 00:04:44.720 "r_mbytes_per_sec": 0, 00:04:44.720 "w_mbytes_per_sec": 0 00:04:44.720 }, 00:04:44.720 "claimed": false, 00:04:44.720 "zoned": false, 00:04:44.720 "supported_io_types": { 00:04:44.720 "read": true, 00:04:44.720 "write": true, 00:04:44.720 "unmap": true, 00:04:44.720 "flush": true, 00:04:44.720 "reset": true, 
00:04:44.720 "nvme_admin": false, 00:04:44.720 "nvme_io": false, 00:04:44.720 "nvme_io_md": false, 00:04:44.720 "write_zeroes": true, 00:04:44.720 "zcopy": true, 00:04:44.720 "get_zone_info": false, 00:04:44.720 "zone_management": false, 00:04:44.720 "zone_append": false, 00:04:44.720 "compare": false, 00:04:44.720 "compare_and_write": false, 00:04:44.720 "abort": true, 00:04:44.720 "seek_hole": false, 00:04:44.720 "seek_data": false, 00:04:44.720 "copy": true, 00:04:44.720 "nvme_iov_md": false 00:04:44.720 }, 00:04:44.720 "memory_domains": [ 00:04:44.720 { 00:04:44.720 "dma_device_id": "system", 00:04:44.720 "dma_device_type": 1 00:04:44.720 }, 00:04:44.720 { 00:04:44.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.720 "dma_device_type": 2 00:04:44.720 } 00:04:44.720 ], 00:04:44.720 "driver_specific": { 00:04:44.720 "passthru": { 00:04:44.720 "name": "Passthru0", 00:04:44.720 "base_bdev_name": "Malloc2" 00:04:44.720 } 00:04:44.720 } 00:04:44.720 } 00:04:44.720 ]' 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.720 17:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.720 17:54:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.720 17:54:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.720 17:54:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.720 00:04:44.720 real 0m0.276s 00:04:44.720 user 0m0.178s 00:04:44.720 sys 0m0.043s 00:04:44.720 17:54:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.720 17:54:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.720 ************************************ 00:04:44.720 END TEST rpc_daemon_integrity 00:04:44.720 ************************************ 00:04:44.720 17:54:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.720 17:54:06 rpc -- rpc/rpc.sh@84 -- # killprocess 1462894 00:04:44.720 17:54:06 rpc -- common/autotest_common.sh@950 -- # '[' -z 1462894 ']' 00:04:44.720 17:54:06 rpc -- common/autotest_common.sh@954 -- # kill -0 1462894 00:04:44.720 17:54:06 rpc -- common/autotest_common.sh@955 -- # uname 00:04:44.720 17:54:06 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.720 17:54:06 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462894 
00:04:44.721 17:54:06 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.721 17:54:06 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.721 17:54:06 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462894' 00:04:44.721 killing process with pid 1462894 00:04:44.721 17:54:06 rpc -- common/autotest_common.sh@969 -- # kill 1462894 00:04:44.721 17:54:06 rpc -- common/autotest_common.sh@974 -- # wait 1462894 00:04:45.288 00:04:45.288 real 0m2.663s 00:04:45.288 user 0m3.335s 00:04:45.288 sys 0m0.838s 00:04:45.288 17:54:06 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.288 17:54:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.288 ************************************ 00:04:45.288 END TEST rpc 00:04:45.288 ************************************ 00:04:45.288 17:54:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.288 17:54:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.288 17:54:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.288 17:54:06 -- common/autotest_common.sh@10 -- # set +x 00:04:45.288 ************************************ 00:04:45.288 START TEST skip_rpc 00:04:45.288 ************************************ 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.288 * Looking for test storage... 00:04:45.288 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.288 17:54:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.288 --rc genhtml_branch_coverage=1 00:04:45.288 --rc genhtml_function_coverage=1 00:04:45.288 --rc genhtml_legend=1 00:04:45.288 --rc geninfo_all_blocks=1 00:04:45.288 --rc geninfo_unexecuted_blocks=1 00:04:45.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.288 ' 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.288 --rc genhtml_branch_coverage=1 00:04:45.288 --rc genhtml_function_coverage=1 00:04:45.288 --rc genhtml_legend=1 00:04:45.288 --rc geninfo_all_blocks=1 00:04:45.288 --rc geninfo_unexecuted_blocks=1 00:04:45.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.288 ' 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.288 --rc genhtml_branch_coverage=1 00:04:45.288 --rc genhtml_function_coverage=1 00:04:45.288 --rc genhtml_legend=1 00:04:45.288 --rc geninfo_all_blocks=1 00:04:45.288 --rc geninfo_unexecuted_blocks=1 00:04:45.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.288 ' 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.288 --rc genhtml_branch_coverage=1 00:04:45.288 --rc genhtml_function_coverage=1 00:04:45.288 --rc genhtml_legend=1 00:04:45.288 --rc geninfo_all_blocks=1 00:04:45.288 --rc geninfo_unexecuted_blocks=1 00:04:45.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.288 ' 00:04:45.288 17:54:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:45.288 17:54:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:45.288 17:54:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.288 17:54:06 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.288 17:54:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.547 ************************************ 00:04:45.548 START TEST skip_rpc 00:04:45.548 ************************************ 00:04:45.548 17:54:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:45.548 17:54:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1463881 00:04:45.548 17:54:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.548 17:54:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.548 17:54:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.548 [2024-10-05 17:54:06.807729] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:45.548 [2024-10-05 17:54:06.807796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463881 ] 00:04:45.548 [2024-10-05 17:54:06.874375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.548 [2024-10-05 17:54:06.952113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1463881 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1463881 ']' 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1463881 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463881 
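The core assertion of skip_rpc sits just above: the target was launched with --no-rpc-server, so rpc_cmd spdk_get_version must fail, and the NOT wrapper turns that expected failure (es=1) into a passing check while still treating signal exits (status > 128) as genuine errors. Stripped down (a sketch, not the exact autotest_common.sh helper):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: a real failure
    (( es != 0 ))                    # succeed only if the command failed
  }
  NOT scripts/rpc.py spdk_get_version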
00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463881' 00:04:50.820 killing process with pid 1463881 00:04:50.820 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1463881 00:04:50.821 17:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1463881 00:04:50.821 00:04:50.821 real 0m5.401s 00:04:50.821 user 0m5.160s 00:04:50.821 sys 0m0.289s 00:04:50.821 17:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.821 17:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.821 ************************************ 00:04:50.821 END TEST skip_rpc 00:04:50.821 ************************************ 00:04:50.821 17:54:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.821 17:54:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.821 17:54:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.821 17:54:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.821 ************************************ 00:04:50.821 START TEST skip_rpc_with_json 00:04:50.821 ************************************ 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1464759 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1464759 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1464759 ']' 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.821 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.080 [2024-10-05 17:54:12.284147] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
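skip_rpc_with_json, starting up here, drives a config round-trip: against a fresh target it first shows nvmf_get_transports failing (no transport exists yet), then creates the TCP transport, snapshots the live configuration with save_config, boots a second target from that JSON, and greps its log for the transport-init banner to prove the config replayed. Condensed (a sketch with paths shortened; flags as used later in this log):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt   # the replay proof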
00:04:51.080 [2024-10-05 17:54:12.284237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464759 ] 00:04:51.080 [2024-10-05 17:54:12.352280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.080 [2024-10-05 17:54:12.428697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.339 [2024-10-05 17:54:12.652608] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.339 request: 00:04:51.339 { 00:04:51.339 "trtype": "tcp", 00:04:51.339 "method": "nvmf_get_transports", 00:04:51.339 "req_id": 1 00:04:51.339 } 00:04:51.339 Got JSON-RPC error response 00:04:51.339 response: 00:04:51.339 { 00:04:51.339 "code": -19, 00:04:51.339 "message": "No such device" 00:04:51.339 } 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.339 [2024-10-05 17:54:12.660690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.339 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.598 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.598 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:51.598 { 00:04:51.598 "subsystems": [ 00:04:51.598 { 00:04:51.598 "subsystem": "scheduler", 00:04:51.598 "config": [ 00:04:51.598 { 00:04:51.598 "method": "framework_set_scheduler", 00:04:51.598 "params": { 00:04:51.598 "name": "static" 00:04:51.598 } 00:04:51.598 } 00:04:51.598 ] 00:04:51.598 }, 00:04:51.598 { 00:04:51.598 "subsystem": "vmd", 00:04:51.598 "config": [] 00:04:51.598 }, 00:04:51.598 { 00:04:51.598 "subsystem": "sock", 00:04:51.598 "config": [ 00:04:51.598 { 00:04:51.598 "method": "sock_set_default_impl", 00:04:51.598 "params": { 00:04:51.598 "impl_name": "posix" 00:04:51.598 } 00:04:51.598 }, 00:04:51.598 { 00:04:51.598 "method": "sock_impl_set_options", 00:04:51.598 "params": { 00:04:51.598 "impl_name": "ssl", 00:04:51.598 "recv_buf_size": 4096, 00:04:51.598 "send_buf_size": 4096, 00:04:51.598 "enable_recv_pipe": true, 00:04:51.598 "enable_quickack": false, 00:04:51.598 
"enable_placement_id": 0, 00:04:51.598 "enable_zerocopy_send_server": true, 00:04:51.598 "enable_zerocopy_send_client": false, 00:04:51.598 "zerocopy_threshold": 0, 00:04:51.598 "tls_version": 0, 00:04:51.598 "enable_ktls": false 00:04:51.598 } 00:04:51.598 }, 00:04:51.598 { 00:04:51.598 "method": "sock_impl_set_options", 00:04:51.598 "params": { 00:04:51.598 "impl_name": "posix", 00:04:51.598 "recv_buf_size": 2097152, 00:04:51.598 "send_buf_size": 2097152, 00:04:51.598 "enable_recv_pipe": true, 00:04:51.598 "enable_quickack": false, 00:04:51.598 "enable_placement_id": 0, 00:04:51.598 "enable_zerocopy_send_server": true, 00:04:51.598 "enable_zerocopy_send_client": false, 00:04:51.598 "zerocopy_threshold": 0, 00:04:51.598 "tls_version": 0, 00:04:51.599 "enable_ktls": false 00:04:51.599 } 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "iobuf", 00:04:51.599 "config": [ 00:04:51.599 { 00:04:51.599 "method": "iobuf_set_options", 00:04:51.599 "params": { 00:04:51.599 "small_pool_count": 8192, 00:04:51.599 "large_pool_count": 1024, 00:04:51.599 "small_bufsize": 8192, 00:04:51.599 "large_bufsize": 135168 00:04:51.599 } 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "keyring", 00:04:51.599 "config": [] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "vfio_user_target", 00:04:51.599 "config": null 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "fsdev", 00:04:51.599 "config": [ 00:04:51.599 { 00:04:51.599 "method": "fsdev_set_opts", 00:04:51.599 "params": { 00:04:51.599 "fsdev_io_pool_size": 65535, 00:04:51.599 "fsdev_io_cache_size": 256 00:04:51.599 } 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "accel", 00:04:51.599 "config": [ 00:04:51.599 { 00:04:51.599 "method": "accel_set_options", 00:04:51.599 "params": { 00:04:51.599 "small_cache_size": 128, 00:04:51.599 "large_cache_size": 16, 00:04:51.599 "task_count": 2048, 00:04:51.599 "sequence_count": 2048, 00:04:51.599 "buf_count": 2048 00:04:51.599 } 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "bdev", 00:04:51.599 "config": [ 00:04:51.599 { 00:04:51.599 "method": "bdev_set_options", 00:04:51.599 "params": { 00:04:51.599 "bdev_io_pool_size": 65535, 00:04:51.599 "bdev_io_cache_size": 256, 00:04:51.599 "bdev_auto_examine": true, 00:04:51.599 "iobuf_small_cache_size": 128, 00:04:51.599 "iobuf_large_cache_size": 16 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "bdev_raid_set_options", 00:04:51.599 "params": { 00:04:51.599 "process_window_size_kb": 1024, 00:04:51.599 "process_max_bandwidth_mb_sec": 0 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "bdev_nvme_set_options", 00:04:51.599 "params": { 00:04:51.599 "action_on_timeout": "none", 00:04:51.599 "timeout_us": 0, 00:04:51.599 "timeout_admin_us": 0, 00:04:51.599 "keep_alive_timeout_ms": 10000, 00:04:51.599 "arbitration_burst": 0, 00:04:51.599 "low_priority_weight": 0, 00:04:51.599 "medium_priority_weight": 0, 00:04:51.599 "high_priority_weight": 0, 00:04:51.599 "nvme_adminq_poll_period_us": 10000, 00:04:51.599 "nvme_ioq_poll_period_us": 0, 00:04:51.599 "io_queue_requests": 0, 00:04:51.599 "delay_cmd_submit": true, 00:04:51.599 "transport_retry_count": 4, 00:04:51.599 "bdev_retry_count": 3, 00:04:51.599 "transport_ack_timeout": 0, 00:04:51.599 "ctrlr_loss_timeout_sec": 0, 00:04:51.599 "reconnect_delay_sec": 0, 00:04:51.599 "fast_io_fail_timeout_sec": 0, 00:04:51.599 
"disable_auto_failback": false, 00:04:51.599 "generate_uuids": false, 00:04:51.599 "transport_tos": 0, 00:04:51.599 "nvme_error_stat": false, 00:04:51.599 "rdma_srq_size": 0, 00:04:51.599 "io_path_stat": false, 00:04:51.599 "allow_accel_sequence": false, 00:04:51.599 "rdma_max_cq_size": 0, 00:04:51.599 "rdma_cm_event_timeout_ms": 0, 00:04:51.599 "dhchap_digests": [ 00:04:51.599 "sha256", 00:04:51.599 "sha384", 00:04:51.599 "sha512" 00:04:51.599 ], 00:04:51.599 "dhchap_dhgroups": [ 00:04:51.599 "null", 00:04:51.599 "ffdhe2048", 00:04:51.599 "ffdhe3072", 00:04:51.599 "ffdhe4096", 00:04:51.599 "ffdhe6144", 00:04:51.599 "ffdhe8192" 00:04:51.599 ] 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "bdev_nvme_set_hotplug", 00:04:51.599 "params": { 00:04:51.599 "period_us": 100000, 00:04:51.599 "enable": false 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "bdev_iscsi_set_options", 00:04:51.599 "params": { 00:04:51.599 "timeout_sec": 30 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "bdev_wait_for_examine" 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "nvmf", 00:04:51.599 "config": [ 00:04:51.599 { 00:04:51.599 "method": "nvmf_set_config", 00:04:51.599 "params": { 00:04:51.599 "discovery_filter": "match_any", 00:04:51.599 "admin_cmd_passthru": { 00:04:51.599 "identify_ctrlr": false 00:04:51.599 }, 00:04:51.599 "dhchap_digests": [ 00:04:51.599 "sha256", 00:04:51.599 "sha384", 00:04:51.599 "sha512" 00:04:51.599 ], 00:04:51.599 "dhchap_dhgroups": [ 00:04:51.599 "null", 00:04:51.599 "ffdhe2048", 00:04:51.599 "ffdhe3072", 00:04:51.599 "ffdhe4096", 00:04:51.599 "ffdhe6144", 00:04:51.599 "ffdhe8192" 00:04:51.599 ] 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "nvmf_set_max_subsystems", 00:04:51.599 "params": { 00:04:51.599 "max_subsystems": 1024 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "nvmf_set_crdt", 00:04:51.599 "params": { 00:04:51.599 "crdt1": 0, 00:04:51.599 "crdt2": 0, 00:04:51.599 "crdt3": 0 00:04:51.599 } 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "method": "nvmf_create_transport", 00:04:51.599 "params": { 00:04:51.599 "trtype": "TCP", 00:04:51.599 "max_queue_depth": 128, 00:04:51.599 "max_io_qpairs_per_ctrlr": 127, 00:04:51.599 "in_capsule_data_size": 4096, 00:04:51.599 "max_io_size": 131072, 00:04:51.599 "io_unit_size": 131072, 00:04:51.599 "max_aq_depth": 128, 00:04:51.599 "num_shared_buffers": 511, 00:04:51.599 "buf_cache_size": 4294967295, 00:04:51.599 "dif_insert_or_strip": false, 00:04:51.599 "zcopy": false, 00:04:51.599 "c2h_success": true, 00:04:51.599 "sock_priority": 0, 00:04:51.599 "abort_timeout_sec": 1, 00:04:51.599 "ack_timeout": 0, 00:04:51.599 "data_wr_pool_size": 0 00:04:51.599 } 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "nbd", 00:04:51.599 "config": [] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "ublk", 00:04:51.599 "config": [] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "vhost_blk", 00:04:51.599 "config": [] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "scsi", 00:04:51.599 "config": null 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "iscsi", 00:04:51.599 "config": [ 00:04:51.599 { 00:04:51.599 "method": "iscsi_set_options", 00:04:51.599 "params": { 00:04:51.599 "node_base": "iqn.2016-06.io.spdk", 00:04:51.599 "max_sessions": 128, 00:04:51.599 "max_connections_per_session": 2, 00:04:51.599 "max_queue_depth": 64, 00:04:51.599 
"default_time2wait": 2, 00:04:51.599 "default_time2retain": 20, 00:04:51.599 "first_burst_length": 8192, 00:04:51.599 "immediate_data": true, 00:04:51.599 "allow_duplicated_isid": false, 00:04:51.599 "error_recovery_level": 0, 00:04:51.599 "nop_timeout": 60, 00:04:51.599 "nop_in_interval": 30, 00:04:51.599 "disable_chap": false, 00:04:51.599 "require_chap": false, 00:04:51.599 "mutual_chap": false, 00:04:51.599 "chap_group": 0, 00:04:51.599 "max_large_datain_per_connection": 64, 00:04:51.599 "max_r2t_per_connection": 4, 00:04:51.599 "pdu_pool_size": 36864, 00:04:51.599 "immediate_data_pool_size": 16384, 00:04:51.599 "data_out_pool_size": 2048 00:04:51.599 } 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 }, 00:04:51.599 { 00:04:51.599 "subsystem": "vhost_scsi", 00:04:51.599 "config": [] 00:04:51.599 } 00:04:51.599 ] 00:04:51.599 } 00:04:51.599 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.599 17:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1464759 00:04:51.599 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1464759 ']' 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1464759 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464759 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464759' 00:04:51.600 killing process with pid 1464759 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1464759 00:04:51.600 17:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1464759 00:04:51.859 17:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1465026 00:04:51.859 17:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:51.859 17:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1465026 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1465026 ']' 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1465026 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465026 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 1465026' 00:04:57.130 killing process with pid 1465026 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1465026 00:04:57.130 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1465026 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:57.390 00:04:57.390 real 0m6.342s 00:04:57.390 user 0m6.003s 00:04:57.390 sys 0m0.634s 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 ************************************ 00:04:57.390 END TEST skip_rpc_with_json 00:04:57.390 ************************************ 00:04:57.390 17:54:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.390 17:54:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.390 17:54:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.390 17:54:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 ************************************ 00:04:57.390 START TEST skip_rpc_with_delay 00:04:57.390 ************************************ 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:04:57.390 [2024-10-05 17:54:18.693045] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:57.390 [2024-10-05 17:54:18.693156] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.390 00:04:57.390 real 0m0.037s 00:04:57.390 user 0m0.014s 00:04:57.390 sys 0m0.023s 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.390 17:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 ************************************ 00:04:57.390 END TEST skip_rpc_with_delay 00:04:57.390 ************************************ 00:04:57.390 17:54:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.390 17:54:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.390 17:54:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.390 17:54:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.390 17:54:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.390 17:54:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 ************************************ 00:04:57.390 START TEST exit_on_failed_rpc_init 00:04:57.390 ************************************ 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1465916 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1465916 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1465916 ']' 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.390 17:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 [2024-10-05 17:54:18.805932] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:04:57.391 [2024-10-05 17:54:18.806001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465916 ] 00:04:57.650 [2024-10-05 17:54:18.873599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.650 [2024-10-05 17:54:18.951412] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.909 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.909 [2024-10-05 17:54:19.191577] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:57.909 [2024-10-05 17:54:19.191640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466106 ] 00:04:57.909 [2024-10-05 17:54:19.258080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.909 [2024-10-05 17:54:19.332688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.909 [2024-10-05 17:54:19.332771] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:57.909 [2024-10-05 17:54:19.332784] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:57.909 [2024-10-05 17:54:19.332791] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.168 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1465916 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1465916 ']' 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1465916 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465916 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465916' 00:04:58.169 killing process with pid 1465916 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1465916 00:04:58.169 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1465916 00:04:58.428 00:04:58.428 real 0m0.994s 00:04:58.428 user 0m1.030s 00:04:58.428 sys 0m0.422s 00:04:58.428 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.428 17:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.428 ************************************ 00:04:58.428 END TEST exit_on_failed_rpc_init 00:04:58.428 ************************************ 00:04:58.428 17:54:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:58.428 00:04:58.428 real 0m13.269s 00:04:58.428 user 0m12.415s 00:04:58.428 sys 0m1.693s 00:04:58.428 17:54:19 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.428 17:54:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.428 ************************************ 00:04:58.428 END TEST skip_rpc 00:04:58.428 ************************************ 00:04:58.428 17:54:19 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.428 17:54:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.428 17:54:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.428 17:54:19 
-- common/autotest_common.sh@10 -- # set +x 00:04:58.687 ************************************ 00:04:58.687 START TEST rpc_client 00:04:58.687 ************************************ 00:04:58.687 17:54:19 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:58.687 * Looking for test storage... 00:04:58.687 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.687 17:54:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.687 --rc genhtml_branch_coverage=1 00:04:58.687 --rc genhtml_function_coverage=1 00:04:58.687 --rc genhtml_legend=1 00:04:58.687 --rc geninfo_all_blocks=1 00:04:58.687 --rc geninfo_unexecuted_blocks=1 00:04:58.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.687 ' 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.687 --rc genhtml_branch_coverage=1 00:04:58.687 --rc genhtml_function_coverage=1 00:04:58.687 --rc genhtml_legend=1 00:04:58.687 --rc geninfo_all_blocks=1 00:04:58.687 --rc geninfo_unexecuted_blocks=1 00:04:58.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.687 ' 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.687 --rc genhtml_branch_coverage=1 00:04:58.687 --rc genhtml_function_coverage=1 00:04:58.687 --rc genhtml_legend=1 00:04:58.687 --rc geninfo_all_blocks=1 00:04:58.687 --rc geninfo_unexecuted_blocks=1 00:04:58.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.687 ' 00:04:58.687 17:54:20 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.687 --rc genhtml_branch_coverage=1 00:04:58.687 --rc genhtml_function_coverage=1 00:04:58.687 --rc genhtml_legend=1 00:04:58.687 --rc geninfo_all_blocks=1 00:04:58.687 --rc geninfo_unexecuted_blocks=1 00:04:58.687 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.687 ' 00:04:58.687 17:54:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:58.687 OK 00:04:58.687 17:54:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.687 00:04:58.688 real 0m0.210s 00:04:58.688 user 0m0.118s 00:04:58.688 sys 0m0.105s 00:04:58.688 17:54:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:58.688 17:54:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.688 ************************************ 00:04:58.688 END TEST rpc_client 00:04:58.688 ************************************ 00:04:58.947 17:54:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.947 17:54:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.947 17:54:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.947 17:54:20 -- common/autotest_common.sh@10 -- # set +x 00:04:58.947 ************************************ 00:04:58.947 START TEST json_config 00:04:58.947 ************************************ 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.947 17:54:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.947 17:54:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.947 17:54:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.947 17:54:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.947 17:54:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.947 17:54:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:58.947 17:54:20 json_config -- scripts/common.sh@345 -- # : 1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.947 17:54:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.947 17:54:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@353 -- # local d=1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.947 17:54:20 json_config -- scripts/common.sh@355 -- # echo 1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.947 17:54:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@353 -- # local d=2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.947 17:54:20 json_config -- scripts/common.sh@355 -- # echo 2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.947 17:54:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.947 17:54:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.947 17:54:20 json_config -- scripts/common.sh@368 -- # return 0 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.947 --rc genhtml_branch_coverage=1 00:04:58.947 --rc genhtml_function_coverage=1 00:04:58.947 --rc genhtml_legend=1 00:04:58.947 --rc geninfo_all_blocks=1 00:04:58.947 --rc geninfo_unexecuted_blocks=1 00:04:58.947 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.947 ' 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.947 --rc genhtml_branch_coverage=1 00:04:58.947 --rc genhtml_function_coverage=1 00:04:58.947 --rc genhtml_legend=1 00:04:58.947 --rc geninfo_all_blocks=1 00:04:58.947 --rc geninfo_unexecuted_blocks=1 00:04:58.947 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.947 ' 00:04:58.947 17:54:20 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.947 --rc genhtml_branch_coverage=1 00:04:58.947 --rc genhtml_function_coverage=1 00:04:58.947 --rc genhtml_legend=1 00:04:58.948 --rc geninfo_all_blocks=1 00:04:58.948 --rc geninfo_unexecuted_blocks=1 00:04:58.948 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.948 ' 00:04:58.948 17:54:20 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.948 --rc genhtml_branch_coverage=1 00:04:58.948 --rc genhtml_function_coverage=1 00:04:58.948 --rc genhtml_legend=1 00:04:58.948 --rc geninfo_all_blocks=1 00:04:58.948 --rc geninfo_unexecuted_blocks=1 00:04:58.948 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:58.948 ' 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:58.948 17:54:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.948 17:54:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.948 17:54:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.948 17:54:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.948 17:54:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.948 17:54:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.948 17:54:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.948 17:54:20 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.948 17:54:20 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@51 -- # : 0 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.948 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.948 17:54:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:58.948 WARNING: No tests are enabled so not running JSON configuration tests 00:04:58.948 17:54:20 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:58.948 00:04:58.948 real 0m0.203s 00:04:58.948 user 0m0.128s 00:04:58.948 sys 0m0.084s 00:04:58.948 17:54:20 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.948 17:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.948 ************************************ 00:04:58.948 END TEST json_config 00:04:58.948 ************************************ 00:04:59.208 17:54:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.208 17:54:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.208 17:54:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.208 17:54:20 -- common/autotest_common.sh@10 -- # set +x 00:04:59.208 ************************************ 00:04:59.208 START TEST json_config_extra_key 00:04:59.208 ************************************ 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov 
--version 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.208 --rc genhtml_branch_coverage=1 00:04:59.208 --rc genhtml_function_coverage=1 00:04:59.208 --rc genhtml_legend=1 00:04:59.208 --rc geninfo_all_blocks=1 00:04:59.208 --rc geninfo_unexecuted_blocks=1 00:04:59.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.208 ' 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.208 --rc genhtml_branch_coverage=1 
00:04:59.208 --rc genhtml_function_coverage=1 00:04:59.208 --rc genhtml_legend=1 00:04:59.208 --rc geninfo_all_blocks=1 00:04:59.208 --rc geninfo_unexecuted_blocks=1 00:04:59.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.208 ' 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.208 --rc genhtml_branch_coverage=1 00:04:59.208 --rc genhtml_function_coverage=1 00:04:59.208 --rc genhtml_legend=1 00:04:59.208 --rc geninfo_all_blocks=1 00:04:59.208 --rc geninfo_unexecuted_blocks=1 00:04:59.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.208 ' 00:04:59.208 17:54:20 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.208 --rc genhtml_branch_coverage=1 00:04:59.208 --rc genhtml_function_coverage=1 00:04:59.208 --rc genhtml_legend=1 00:04:59.208 --rc geninfo_all_blocks=1 00:04:59.208 --rc geninfo_unexecuted_blocks=1 00:04:59.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:59.208 ' 00:04:59.208 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.208 17:54:20 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.208 17:54:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.208 17:54:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.208 17:54:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.208 17:54:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.208 17:54:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.208 17:54:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.208 17:54:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.209 17:54:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.209 17:54:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.209 17:54:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.209 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.209 17:54:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.209 17:54:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.209 17:54:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.209 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:59.209 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.209 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:04:59.209 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.209 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.209 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.468 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.468 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.468 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.468 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.468 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.468 INFO: launching applications... 00:04:59.468 17:54:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1466529 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.468 Waiting for target to run... 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1466529 /var/tmp/spdk_tgt.sock 00:04:59.468 17:54:20 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1466529 ']' 00:04:59.468 17:54:20 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.468 17:54:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.468 17:54:20 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.468 17:54:20 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:59.468 17:54:20 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.468 17:54:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.468 [2024-10-05 17:54:20.697851] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:59.468 [2024-10-05 17:54:20.697917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466529 ] 00:04:59.727 [2024-10-05 17:54:20.986310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.727 [2024-10-05 17:54:21.051693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.295 17:54:21 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.295 17:54:21 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.295 00:05:00.295 17:54:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:00.295 INFO: shutting down applications... 00:05:00.295 17:54:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1466529 ]] 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1466529 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1466529 00:05:00.295 17:54:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1466529 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.864 17:54:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.864 SPDK target shutdown done 00:05:00.864 17:54:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.864 Success 00:05:00.864 00:05:00.864 real 0m1.590s 00:05:00.864 user 0m1.362s 00:05:00.864 sys 0m0.415s 00:05:00.864 17:54:22 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.864 17:54:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.864 ************************************ 00:05:00.864 END TEST json_config_extra_key 00:05:00.864 ************************************ 00:05:00.864 17:54:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:05:00.864 17:54:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.864 17:54:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.864 17:54:22 -- common/autotest_common.sh@10 -- # set +x 00:05:00.864 ************************************ 00:05:00.864 START TEST alias_rpc 00:05:00.864 ************************************ 00:05:00.864 17:54:22 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.864 * Looking for test storage... 00:05:00.864 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:00.864 17:54:22 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.864 17:54:22 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.864 17:54:22 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.864 17:54:22 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.865 17:54:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.124 17:54:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.124 17:54:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.124 17:54:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.124 17:54:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.124 17:54:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.124 --rc genhtml_branch_coverage=1 00:05:01.124 --rc genhtml_function_coverage=1 00:05:01.124 --rc genhtml_legend=1 00:05:01.124 --rc geninfo_all_blocks=1 00:05:01.124 --rc geninfo_unexecuted_blocks=1 00:05:01.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.124 ' 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.124 --rc genhtml_branch_coverage=1 00:05:01.124 --rc genhtml_function_coverage=1 00:05:01.124 --rc genhtml_legend=1 00:05:01.124 --rc geninfo_all_blocks=1 00:05:01.124 --rc geninfo_unexecuted_blocks=1 00:05:01.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.124 ' 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.124 --rc genhtml_branch_coverage=1 00:05:01.124 --rc genhtml_function_coverage=1 00:05:01.124 --rc genhtml_legend=1 00:05:01.124 --rc geninfo_all_blocks=1 00:05:01.124 --rc geninfo_unexecuted_blocks=1 00:05:01.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.124 ' 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.124 --rc genhtml_branch_coverage=1 00:05:01.124 --rc genhtml_function_coverage=1 00:05:01.124 --rc genhtml_legend=1 00:05:01.124 --rc geninfo_all_blocks=1 00:05:01.124 --rc geninfo_unexecuted_blocks=1 00:05:01.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:01.124 ' 00:05:01.124 17:54:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.124 17:54:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1466905 00:05:01.124 17:54:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1466905 00:05:01.124 17:54:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.124 17:54:22 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 1466905 ']' 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.124 17:54:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.124 [2024-10-05 17:54:22.358529] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:01.124 [2024-10-05 17:54:22.358612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466905 ] 00:05:01.124 [2024-10-05 17:54:22.425819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.124 [2024-10-05 17:54:22.497557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.061 17:54:23 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.062 17:54:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:02.062 17:54:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1466905 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1466905 ']' 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1466905 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1466905 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1466905' 00:05:02.062 killing process with pid 1466905 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@969 -- # kill 1466905 00:05:02.062 17:54:23 alias_rpc -- common/autotest_common.sh@974 -- # wait 1466905 00:05:02.629 00:05:02.629 real 0m1.652s 00:05:02.629 user 0m1.743s 00:05:02.629 sys 0m0.497s 00:05:02.629 17:54:23 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.629 17:54:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.629 ************************************ 00:05:02.629 END TEST alias_rpc 00:05:02.629 ************************************ 00:05:02.629 17:54:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:02.629 17:54:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.629 17:54:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.630 17:54:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.630 17:54:23 -- common/autotest_common.sh@10 -- # set +x 00:05:02.630 ************************************ 00:05:02.630 START TEST 
spdkcli_tcp 00:05:02.630 ************************************ 00:05:02.630 17:54:23 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.630 * Looking for test storage... 00:05:02.630 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:02.630 17:54:23 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.630 17:54:23 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.630 17:54:23 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.630 17:54:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.630 --rc genhtml_branch_coverage=1 00:05:02.630 --rc genhtml_function_coverage=1 00:05:02.630 --rc genhtml_legend=1 00:05:02.630 --rc geninfo_all_blocks=1 00:05:02.630 --rc geninfo_unexecuted_blocks=1 00:05:02.630 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.630 ' 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.630 --rc genhtml_branch_coverage=1 00:05:02.630 --rc genhtml_function_coverage=1 00:05:02.630 --rc genhtml_legend=1 00:05:02.630 --rc geninfo_all_blocks=1 00:05:02.630 --rc geninfo_unexecuted_blocks=1 00:05:02.630 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.630 ' 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.630 --rc genhtml_branch_coverage=1 00:05:02.630 --rc genhtml_function_coverage=1 00:05:02.630 --rc genhtml_legend=1 00:05:02.630 --rc geninfo_all_blocks=1 00:05:02.630 --rc geninfo_unexecuted_blocks=1 00:05:02.630 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.630 ' 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.630 --rc genhtml_branch_coverage=1 00:05:02.630 --rc genhtml_function_coverage=1 00:05:02.630 --rc genhtml_legend=1 00:05:02.630 --rc geninfo_all_blocks=1 00:05:02.630 --rc geninfo_unexecuted_blocks=1 00:05:02.630 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:02.630 ' 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1467237 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.630 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1467237 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1467237 ']' 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.630 17:54:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.630 [2024-10-05 17:54:24.085217] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:02.630 [2024-10-05 17:54:24.085288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467237 ] 00:05:02.889 [2024-10-05 17:54:24.151287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.889 [2024-10-05 17:54:24.226129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.889 [2024-10-05 17:54:24.226131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.146 17:54:24 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.146 17:54:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:03.146 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1467251 00:05:03.146 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.146 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.146 [ 00:05:03.146 "spdk_get_version", 00:05:03.146 "rpc_get_methods", 00:05:03.146 "notify_get_notifications", 00:05:03.146 "notify_get_types", 00:05:03.146 "trace_get_info", 00:05:03.146 "trace_get_tpoint_group_mask", 00:05:03.146 "trace_disable_tpoint_group", 00:05:03.146 "trace_enable_tpoint_group", 00:05:03.146 "trace_clear_tpoint_mask", 00:05:03.146 "trace_set_tpoint_mask", 00:05:03.146 "fsdev_set_opts", 00:05:03.146 "fsdev_get_opts", 00:05:03.146 "framework_get_pci_devices", 00:05:03.146 "framework_get_config", 00:05:03.146 "framework_get_subsystems", 00:05:03.146 "vfu_tgt_set_base_path", 00:05:03.146 
"keyring_get_keys", 00:05:03.146 "iobuf_get_stats", 00:05:03.146 "iobuf_set_options", 00:05:03.146 "sock_get_default_impl", 00:05:03.146 "sock_set_default_impl", 00:05:03.146 "sock_impl_set_options", 00:05:03.146 "sock_impl_get_options", 00:05:03.146 "vmd_rescan", 00:05:03.146 "vmd_remove_device", 00:05:03.146 "vmd_enable", 00:05:03.146 "accel_get_stats", 00:05:03.146 "accel_set_options", 00:05:03.146 "accel_set_driver", 00:05:03.146 "accel_crypto_key_destroy", 00:05:03.146 "accel_crypto_keys_get", 00:05:03.146 "accel_crypto_key_create", 00:05:03.146 "accel_assign_opc", 00:05:03.146 "accel_get_module_info", 00:05:03.146 "accel_get_opc_assignments", 00:05:03.146 "bdev_get_histogram", 00:05:03.146 "bdev_enable_histogram", 00:05:03.146 "bdev_set_qos_limit", 00:05:03.146 "bdev_set_qd_sampling_period", 00:05:03.146 "bdev_get_bdevs", 00:05:03.146 "bdev_reset_iostat", 00:05:03.146 "bdev_get_iostat", 00:05:03.146 "bdev_examine", 00:05:03.146 "bdev_wait_for_examine", 00:05:03.146 "bdev_set_options", 00:05:03.146 "scsi_get_devices", 00:05:03.146 "thread_set_cpumask", 00:05:03.146 "scheduler_set_options", 00:05:03.146 "framework_get_governor", 00:05:03.146 "framework_get_scheduler", 00:05:03.147 "framework_set_scheduler", 00:05:03.147 "framework_get_reactors", 00:05:03.147 "thread_get_io_channels", 00:05:03.147 "thread_get_pollers", 00:05:03.147 "thread_get_stats", 00:05:03.147 "framework_monitor_context_switch", 00:05:03.147 "spdk_kill_instance", 00:05:03.147 "log_enable_timestamps", 00:05:03.147 "log_get_flags", 00:05:03.147 "log_clear_flag", 00:05:03.147 "log_set_flag", 00:05:03.147 "log_get_level", 00:05:03.147 "log_set_level", 00:05:03.147 "log_get_print_level", 00:05:03.147 "log_set_print_level", 00:05:03.147 "framework_enable_cpumask_locks", 00:05:03.147 "framework_disable_cpumask_locks", 00:05:03.147 "framework_wait_init", 00:05:03.147 "framework_start_init", 00:05:03.147 "virtio_blk_create_transport", 00:05:03.147 "virtio_blk_get_transports", 00:05:03.147 "vhost_controller_set_coalescing", 00:05:03.147 "vhost_get_controllers", 00:05:03.147 "vhost_delete_controller", 00:05:03.147 "vhost_create_blk_controller", 00:05:03.147 "vhost_scsi_controller_remove_target", 00:05:03.147 "vhost_scsi_controller_add_target", 00:05:03.147 "vhost_start_scsi_controller", 00:05:03.147 "vhost_create_scsi_controller", 00:05:03.147 "ublk_recover_disk", 00:05:03.147 "ublk_get_disks", 00:05:03.147 "ublk_stop_disk", 00:05:03.147 "ublk_start_disk", 00:05:03.147 "ublk_destroy_target", 00:05:03.147 "ublk_create_target", 00:05:03.147 "nbd_get_disks", 00:05:03.147 "nbd_stop_disk", 00:05:03.147 "nbd_start_disk", 00:05:03.147 "env_dpdk_get_mem_stats", 00:05:03.147 "nvmf_stop_mdns_prr", 00:05:03.147 "nvmf_publish_mdns_prr", 00:05:03.147 "nvmf_subsystem_get_listeners", 00:05:03.147 "nvmf_subsystem_get_qpairs", 00:05:03.147 "nvmf_subsystem_get_controllers", 00:05:03.147 "nvmf_get_stats", 00:05:03.147 "nvmf_get_transports", 00:05:03.147 "nvmf_create_transport", 00:05:03.147 "nvmf_get_targets", 00:05:03.147 "nvmf_delete_target", 00:05:03.147 "nvmf_create_target", 00:05:03.147 "nvmf_subsystem_allow_any_host", 00:05:03.147 "nvmf_subsystem_set_keys", 00:05:03.147 "nvmf_subsystem_remove_host", 00:05:03.147 "nvmf_subsystem_add_host", 00:05:03.147 "nvmf_ns_remove_host", 00:05:03.147 "nvmf_ns_add_host", 00:05:03.147 "nvmf_subsystem_remove_ns", 00:05:03.147 "nvmf_subsystem_set_ns_ana_group", 00:05:03.147 "nvmf_subsystem_add_ns", 00:05:03.147 "nvmf_subsystem_listener_set_ana_state", 00:05:03.147 "nvmf_discovery_get_referrals", 
00:05:03.147 "nvmf_discovery_remove_referral", 00:05:03.147 "nvmf_discovery_add_referral", 00:05:03.147 "nvmf_subsystem_remove_listener", 00:05:03.147 "nvmf_subsystem_add_listener", 00:05:03.147 "nvmf_delete_subsystem", 00:05:03.147 "nvmf_create_subsystem", 00:05:03.147 "nvmf_get_subsystems", 00:05:03.147 "nvmf_set_crdt", 00:05:03.147 "nvmf_set_config", 00:05:03.147 "nvmf_set_max_subsystems", 00:05:03.147 "iscsi_get_histogram", 00:05:03.147 "iscsi_enable_histogram", 00:05:03.147 "iscsi_set_options", 00:05:03.147 "iscsi_get_auth_groups", 00:05:03.147 "iscsi_auth_group_remove_secret", 00:05:03.147 "iscsi_auth_group_add_secret", 00:05:03.147 "iscsi_delete_auth_group", 00:05:03.147 "iscsi_create_auth_group", 00:05:03.147 "iscsi_set_discovery_auth", 00:05:03.147 "iscsi_get_options", 00:05:03.147 "iscsi_target_node_request_logout", 00:05:03.147 "iscsi_target_node_set_redirect", 00:05:03.147 "iscsi_target_node_set_auth", 00:05:03.147 "iscsi_target_node_add_lun", 00:05:03.147 "iscsi_get_stats", 00:05:03.147 "iscsi_get_connections", 00:05:03.147 "iscsi_portal_group_set_auth", 00:05:03.147 "iscsi_start_portal_group", 00:05:03.147 "iscsi_delete_portal_group", 00:05:03.147 "iscsi_create_portal_group", 00:05:03.147 "iscsi_get_portal_groups", 00:05:03.147 "iscsi_delete_target_node", 00:05:03.147 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.147 "iscsi_target_node_add_pg_ig_maps", 00:05:03.147 "iscsi_create_target_node", 00:05:03.147 "iscsi_get_target_nodes", 00:05:03.147 "iscsi_delete_initiator_group", 00:05:03.147 "iscsi_initiator_group_remove_initiators", 00:05:03.147 "iscsi_initiator_group_add_initiators", 00:05:03.147 "iscsi_create_initiator_group", 00:05:03.147 "iscsi_get_initiator_groups", 00:05:03.147 "fsdev_aio_delete", 00:05:03.147 "fsdev_aio_create", 00:05:03.147 "keyring_linux_set_options", 00:05:03.147 "keyring_file_remove_key", 00:05:03.147 "keyring_file_add_key", 00:05:03.147 "vfu_virtio_create_fs_endpoint", 00:05:03.147 "vfu_virtio_create_scsi_endpoint", 00:05:03.147 "vfu_virtio_scsi_remove_target", 00:05:03.147 "vfu_virtio_scsi_add_target", 00:05:03.147 "vfu_virtio_create_blk_endpoint", 00:05:03.147 "vfu_virtio_delete_endpoint", 00:05:03.147 "iaa_scan_accel_module", 00:05:03.147 "dsa_scan_accel_module", 00:05:03.147 "ioat_scan_accel_module", 00:05:03.147 "accel_error_inject_error", 00:05:03.147 "bdev_iscsi_delete", 00:05:03.147 "bdev_iscsi_create", 00:05:03.147 "bdev_iscsi_set_options", 00:05:03.147 "bdev_virtio_attach_controller", 00:05:03.147 "bdev_virtio_scsi_get_devices", 00:05:03.147 "bdev_virtio_detach_controller", 00:05:03.147 "bdev_virtio_blk_set_hotplug", 00:05:03.147 "bdev_ftl_set_property", 00:05:03.147 "bdev_ftl_get_properties", 00:05:03.147 "bdev_ftl_get_stats", 00:05:03.147 "bdev_ftl_unmap", 00:05:03.147 "bdev_ftl_unload", 00:05:03.147 "bdev_ftl_delete", 00:05:03.147 "bdev_ftl_load", 00:05:03.147 "bdev_ftl_create", 00:05:03.147 "bdev_aio_delete", 00:05:03.147 "bdev_aio_rescan", 00:05:03.147 "bdev_aio_create", 00:05:03.147 "blobfs_create", 00:05:03.147 "blobfs_detect", 00:05:03.147 "blobfs_set_cache_size", 00:05:03.147 "bdev_zone_block_delete", 00:05:03.147 "bdev_zone_block_create", 00:05:03.147 "bdev_delay_delete", 00:05:03.147 "bdev_delay_create", 00:05:03.147 "bdev_delay_update_latency", 00:05:03.147 "bdev_split_delete", 00:05:03.147 "bdev_split_create", 00:05:03.147 "bdev_error_inject_error", 00:05:03.147 "bdev_error_delete", 00:05:03.147 "bdev_error_create", 00:05:03.147 "bdev_raid_set_options", 00:05:03.147 "bdev_raid_remove_base_bdev", 00:05:03.147 
"bdev_raid_add_base_bdev", 00:05:03.147 "bdev_raid_delete", 00:05:03.147 "bdev_raid_create", 00:05:03.147 "bdev_raid_get_bdevs", 00:05:03.147 "bdev_lvol_set_parent_bdev", 00:05:03.147 "bdev_lvol_set_parent", 00:05:03.147 "bdev_lvol_check_shallow_copy", 00:05:03.147 "bdev_lvol_start_shallow_copy", 00:05:03.147 "bdev_lvol_grow_lvstore", 00:05:03.147 "bdev_lvol_get_lvols", 00:05:03.147 "bdev_lvol_get_lvstores", 00:05:03.147 "bdev_lvol_delete", 00:05:03.147 "bdev_lvol_set_read_only", 00:05:03.147 "bdev_lvol_resize", 00:05:03.147 "bdev_lvol_decouple_parent", 00:05:03.147 "bdev_lvol_inflate", 00:05:03.147 "bdev_lvol_rename", 00:05:03.147 "bdev_lvol_clone_bdev", 00:05:03.147 "bdev_lvol_clone", 00:05:03.147 "bdev_lvol_snapshot", 00:05:03.147 "bdev_lvol_create", 00:05:03.147 "bdev_lvol_delete_lvstore", 00:05:03.147 "bdev_lvol_rename_lvstore", 00:05:03.147 "bdev_lvol_create_lvstore", 00:05:03.147 "bdev_passthru_delete", 00:05:03.147 "bdev_passthru_create", 00:05:03.147 "bdev_nvme_cuse_unregister", 00:05:03.147 "bdev_nvme_cuse_register", 00:05:03.147 "bdev_opal_new_user", 00:05:03.147 "bdev_opal_set_lock_state", 00:05:03.147 "bdev_opal_delete", 00:05:03.147 "bdev_opal_get_info", 00:05:03.147 "bdev_opal_create", 00:05:03.147 "bdev_nvme_opal_revert", 00:05:03.147 "bdev_nvme_opal_init", 00:05:03.147 "bdev_nvme_send_cmd", 00:05:03.147 "bdev_nvme_set_keys", 00:05:03.147 "bdev_nvme_get_path_iostat", 00:05:03.147 "bdev_nvme_get_mdns_discovery_info", 00:05:03.147 "bdev_nvme_stop_mdns_discovery", 00:05:03.147 "bdev_nvme_start_mdns_discovery", 00:05:03.147 "bdev_nvme_set_multipath_policy", 00:05:03.147 "bdev_nvme_set_preferred_path", 00:05:03.147 "bdev_nvme_get_io_paths", 00:05:03.147 "bdev_nvme_remove_error_injection", 00:05:03.147 "bdev_nvme_add_error_injection", 00:05:03.147 "bdev_nvme_get_discovery_info", 00:05:03.147 "bdev_nvme_stop_discovery", 00:05:03.147 "bdev_nvme_start_discovery", 00:05:03.147 "bdev_nvme_get_controller_health_info", 00:05:03.147 "bdev_nvme_disable_controller", 00:05:03.147 "bdev_nvme_enable_controller", 00:05:03.147 "bdev_nvme_reset_controller", 00:05:03.147 "bdev_nvme_get_transport_statistics", 00:05:03.147 "bdev_nvme_apply_firmware", 00:05:03.147 "bdev_nvme_detach_controller", 00:05:03.147 "bdev_nvme_get_controllers", 00:05:03.147 "bdev_nvme_attach_controller", 00:05:03.147 "bdev_nvme_set_hotplug", 00:05:03.147 "bdev_nvme_set_options", 00:05:03.147 "bdev_null_resize", 00:05:03.147 "bdev_null_delete", 00:05:03.147 "bdev_null_create", 00:05:03.147 "bdev_malloc_delete", 00:05:03.147 "bdev_malloc_create" 00:05:03.147 ] 00:05:03.406 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.406 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.406 17:54:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1467237 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1467237 ']' 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1467237 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467237 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.406 
17:54:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467237' 00:05:03.406 killing process with pid 1467237 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1467237 00:05:03.406 17:54:24 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1467237 00:05:03.665 00:05:03.665 real 0m1.183s 00:05:03.665 user 0m1.938s 00:05:03.665 sys 0m0.477s 00:05:03.665 17:54:25 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.665 17:54:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 ************************************ 00:05:03.665 END TEST spdkcli_tcp 00:05:03.665 ************************************ 00:05:03.665 17:54:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.665 17:54:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.665 17:54:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.665 17:54:25 -- common/autotest_common.sh@10 -- # set +x 00:05:03.665 ************************************ 00:05:03.665 START TEST dpdk_mem_utility 00:05:03.665 ************************************ 00:05:03.665 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.925 * Looking for test storage... 00:05:03.925 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.925 17:54:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.925 --rc genhtml_branch_coverage=1 00:05:03.925 --rc genhtml_function_coverage=1 00:05:03.925 --rc genhtml_legend=1 00:05:03.925 --rc geninfo_all_blocks=1 00:05:03.925 --rc geninfo_unexecuted_blocks=1 00:05:03.925 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.925 ' 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.925 --rc genhtml_branch_coverage=1 00:05:03.925 --rc genhtml_function_coverage=1 00:05:03.925 --rc genhtml_legend=1 00:05:03.925 --rc geninfo_all_blocks=1 00:05:03.925 --rc geninfo_unexecuted_blocks=1 00:05:03.925 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.925 ' 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.925 --rc genhtml_branch_coverage=1 00:05:03.925 --rc genhtml_function_coverage=1 00:05:03.925 --rc genhtml_legend=1 00:05:03.925 --rc geninfo_all_blocks=1 00:05:03.925 --rc geninfo_unexecuted_blocks=1 00:05:03.925 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.925 ' 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.925 --rc genhtml_branch_coverage=1 00:05:03.925 --rc genhtml_function_coverage=1 00:05:03.925 --rc genhtml_legend=1 00:05:03.925 --rc geninfo_all_blocks=1 00:05:03.925 --rc geninfo_unexecuted_blocks=1 00:05:03.925 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:03.925 ' 00:05:03.925 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.925 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1467502 00:05:03.925 17:54:25 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1467502 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1467502 ']' 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.925 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.925 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.925 [2024-10-05 17:54:25.293240] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:03.925 [2024-10-05 17:54:25.293331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467502 ] 00:05:03.925 [2024-10-05 17:54:25.359301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.184 [2024-10-05 17:54:25.436999] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.184 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.184 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:04.184 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.184 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.184 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.184 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.184 { 00:05:04.184 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.184 } 00:05:04.184 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.184 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.444 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:04.444 1 heaps totaling size 860.000000 MiB 00:05:04.444 size: 860.000000 MiB heap id: 0 00:05:04.444 end heaps---------- 00:05:04.444 9 mempools totaling size 642.649841 MiB 00:05:04.444 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.444 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.444 size: 92.545471 MiB name: bdev_io_1467502 00:05:04.444 size: 51.011292 MiB name: evtpool_1467502 00:05:04.444 size: 50.003479 MiB name: msgpool_1467502 00:05:04.444 size: 36.509338 MiB name: fsdev_io_1467502 00:05:04.444 size: 21.763794 MiB name: PDU_Pool 00:05:04.444 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.444 size: 0.026123 MiB name: Session_Pool 00:05:04.444 end mempools------- 00:05:04.444 6 memzones totaling size 4.142822 MiB 00:05:04.444 size: 1.000366 MiB name: RG_ring_0_1467502 00:05:04.444 size: 1.000366 MiB name: RG_ring_1_1467502 00:05:04.444 size: 1.000366 MiB name: RG_ring_4_1467502 
00:05:04.444 size: 1.000366 MiB name: RG_ring_5_1467502 00:05:04.444 size: 0.125366 MiB name: RG_ring_2_1467502 00:05:04.444 size: 0.015991 MiB name: RG_ring_3_1467502 00:05:04.444 end memzones------- 00:05:04.444 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.444 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:04.444 list of free elements. size: 13.984680 MiB 00:05:04.445 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:04.445 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:04.445 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:04.445 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:04.445 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:04.445 element at address: 0x20000b200000 with size: 0.959839 MiB 00:05:04.445 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:04.445 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:04.445 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:04.445 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:04.445 element at address: 0x200003e00000 with size: 0.495422 MiB 00:05:04.445 element at address: 0x200007000000 with size: 0.490723 MiB 00:05:04.445 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:04.445 element at address: 0x200013800000 with size: 0.481934 MiB 00:05:04.445 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:04.445 element at address: 0x200003a00000 with size: 0.355042 MiB 00:05:04.445 list of standard malloc elements. size: 199.218628 MiB 00:05:04.445 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:04.445 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:04.445 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:04.445 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:04.445 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:04.445 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.445 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:04.445 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.445 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:04.445 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003a5b100 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003adb3c0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003adb5c0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003adf880 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200003eff000 with size: 
0.000183 MiB 00:05:04.445 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20000707da00 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20000707dac0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001387b600 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001387b6c0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x2000138fb980 with size: 0.000183 MiB 00:05:04.445 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:04.445 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:04.445 list of memzone associated elements. size: 646.796692 MiB 00:05:04.445 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:04.445 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.445 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:04.445 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.445 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:04.445 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1467502_0 00:05:04.445 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:04.445 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1467502_0 00:05:04.445 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:04.445 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1467502_0 00:05:04.445 element at address: 0x2000139fdb80 with size: 36.008911 MiB 00:05:04.445 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1467502_0 00:05:04.445 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:04.445 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.445 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:04.445 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.445 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:04.445 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1467502 00:05:04.445 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:04.445 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1467502 00:05:04.445 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.445 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1467502 00:05:04.445 element at address: 0x2000138fba40 with size: 1.008118 MiB 00:05:04.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.445 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:04.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.445 element at address: 0x20000b2fde40 with 
size: 1.008118 MiB 00:05:04.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.445 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:04.445 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.445 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:04.445 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1467502 00:05:04.445 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:04.445 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1467502 00:05:04.445 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:04.445 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1467502 00:05:04.445 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:04.445 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1467502 00:05:04.445 element at address: 0x200003a5b1c0 with size: 0.500488 MiB 00:05:04.445 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1467502 00:05:04.445 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:05:04.445 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1467502 00:05:04.445 element at address: 0x20001387b780 with size: 0.500488 MiB 00:05:04.445 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.445 element at address: 0x20000707db80 with size: 0.500488 MiB 00:05:04.445 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.445 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:04.445 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.446 element at address: 0x200003adf940 with size: 0.125488 MiB 00:05:04.446 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1467502 00:05:04.446 element at address: 0x20000b2f5b80 with size: 0.031738 MiB 00:05:04.446 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.446 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:04.446 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.446 element at address: 0x200003adb680 with size: 0.016113 MiB 00:05:04.446 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1467502 00:05:04.446 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:04.446 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.446 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:04.446 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1467502 00:05:04.446 element at address: 0x200003adb480 with size: 0.000305 MiB 00:05:04.446 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1467502 00:05:04.446 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:05:04.446 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1467502 00:05:04.446 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:04.446 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.446 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.446 17:54:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1467502 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1467502 ']' 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1467502 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 
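The memzone dump that just ended is the second half of a two-step flow: rpc_cmd env_dpdk_get_mem_stats asks the running target to write raw DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders it, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element detail for heap id 0. Driving the same flow by hand against a target already listening on /var/tmp/spdk.sock looks like this sketch (paths assume an SPDK checkout; this mirrors the trace, not the vendored test script):

# Dump and inspect DPDK memory stats from a running SPDK target (sketch).
./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # heaps, mempools, memzones summary
./scripts/dpdk_mem_info.py -m 0           # free/malloc element detail for heap id 0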
00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467502 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467502' 00:05:04.446 killing process with pid 1467502 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1467502 00:05:04.446 17:54:25 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1467502 00:05:04.705 00:05:04.705 real 0m1.023s 00:05:04.705 user 0m0.929s 00:05:04.705 sys 0m0.439s 00:05:04.705 17:54:26 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.705 17:54:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.705 ************************************ 00:05:04.705 END TEST dpdk_mem_utility 00:05:04.705 ************************************ 00:05:04.705 17:54:26 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:04.965 17:54:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.965 17:54:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.965 17:54:26 -- common/autotest_common.sh@10 -- # set +x 00:05:04.965 ************************************ 00:05:04.965 START TEST event 00:05:04.965 ************************************ 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:04.965 * Looking for test storage... 00:05:04.965 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.965 17:54:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.965 17:54:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.965 17:54:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.965 17:54:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.965 17:54:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.965 17:54:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.965 17:54:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.965 17:54:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.965 17:54:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.965 17:54:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.965 17:54:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.965 17:54:26 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.965 17:54:26 event -- scripts/common.sh@345 -- # : 1 00:05:04.965 17:54:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.965 17:54:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.965 17:54:26 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.965 17:54:26 event -- scripts/common.sh@353 -- # local d=1 00:05:04.965 17:54:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.965 17:54:26 event -- scripts/common.sh@355 -- # echo 1 00:05:04.965 17:54:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.965 17:54:26 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.965 17:54:26 event -- scripts/common.sh@353 -- # local d=2 00:05:04.965 17:54:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.965 17:54:26 event -- scripts/common.sh@355 -- # echo 2 00:05:04.965 17:54:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.965 17:54:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.965 17:54:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.965 17:54:26 event -- scripts/common.sh@368 -- # return 0 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.965 --rc genhtml_branch_coverage=1 00:05:04.965 --rc genhtml_function_coverage=1 00:05:04.965 --rc genhtml_legend=1 00:05:04.965 --rc geninfo_all_blocks=1 00:05:04.965 --rc geninfo_unexecuted_blocks=1 00:05:04.965 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.965 ' 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.965 --rc genhtml_branch_coverage=1 00:05:04.965 --rc genhtml_function_coverage=1 00:05:04.965 --rc genhtml_legend=1 00:05:04.965 --rc geninfo_all_blocks=1 00:05:04.965 --rc geninfo_unexecuted_blocks=1 00:05:04.965 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.965 ' 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.965 --rc genhtml_branch_coverage=1 00:05:04.965 --rc genhtml_function_coverage=1 00:05:04.965 --rc genhtml_legend=1 00:05:04.965 --rc geninfo_all_blocks=1 00:05:04.965 --rc geninfo_unexecuted_blocks=1 00:05:04.965 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.965 ' 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.965 --rc genhtml_branch_coverage=1 00:05:04.965 --rc genhtml_function_coverage=1 00:05:04.965 --rc genhtml_legend=1 00:05:04.965 --rc geninfo_all_blocks=1 00:05:04.965 --rc geninfo_unexecuted_blocks=1 00:05:04.965 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:04.965 ' 00:05:04.965 17:54:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.965 17:54:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.965 17:54:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:04.965 17:54:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
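Every suite in this log, including the event suite just prepared above, re-traces the same scripts/common.sh helper before choosing lcov flags: lt 1.15 2 splits both version strings on the characters .-:, compares them component-wise, and since 1 < 2 returns success, which selects the legacy --rc lcov options. A condensed reconstruction of that helper as inferred from the xtrace (numeric components only; the vendored version also copes with hex parts):

# Reconstruction of the lt/cmp_versions pattern from the trace (sketch).
cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2 v lt=0 gt=0 eq=0
    case "$op" in
        "<") lt=1 ;; ">") gt=1 ;; "<=") lt=1 eq=1 ;; ">=") gt=1 eq=1 ;; "==") eq=1 ;;
    esac
    # Walk components up to the longer length; unset parts compare as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && return $((gt ^ 1))
        ((ver1[v] < ver2[v])) && return $((lt ^ 1))   # "1.15 < 2" is decided here: 1 < 2
    done
    ((eq == 1))
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc options"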
00:05:04.965 17:54:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.965 ************************************ 00:05:04.965 START TEST event_perf 00:05:04.965 ************************************ 00:05:04.965 17:54:26 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.965 Running I/O for 1 seconds...[2024-10-05 17:54:26.425605] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:04.965 [2024-10-05 17:54:26.425647] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467659 ] 00:05:05.224 [2024-10-05 17:54:26.492336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.224 [2024-10-05 17:54:26.568365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.224 [2024-10-05 17:54:26.568384] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.224 [2024-10-05 17:54:26.568467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.224 [2024-10-05 17:54:26.568468] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.603 Running I/O for 1 seconds... 00:05:06.603 lcore 0: 194142 00:05:06.603 lcore 1: 194141 00:05:06.603 lcore 2: 194141 00:05:06.603 lcore 3: 194142 00:05:06.603 done. 00:05:06.603 00:05:06.603 real 0m1.215s 00:05:06.603 user 0m4.127s 00:05:06.603 sys 0m0.085s 00:05:06.603 17:54:27 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.603 17:54:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.603 ************************************ 00:05:06.603 END TEST event_perf 00:05:06.603 ************************************ 00:05:06.603 17:54:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.603 17:54:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:06.603 17:54:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.603 17:54:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.603 ************************************ 00:05:06.603 START TEST event_reactor 00:05:06.603 ************************************ 00:05:06.603 17:54:27 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.603 [2024-10-05 17:54:27.728831] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
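The START TEST / END TEST banners and the real/user/sys triple that just bracketed event_perf come from the run_test wrapper in autotest_common.sh; the '[' 6 -le 1 ']' check traced beforehand is its guard that a test name plus a command were actually supplied. A stripped-down sketch of such a wrapper (the vendored one also toggles xtrace state and records per-test timing):

# run_test-style wrapper (sketch): banner, time the command, banner again.
run_test() {
    local test_name=$1; shift
    (($# >= 1)) || { echo "usage: run_test <name> <command> [args...]" >&2; return 1; }
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}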
00:05:06.603 [2024-10-05 17:54:27.728936] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467944 ] 00:05:06.603 [2024-10-05 17:54:27.806851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.603 [2024-10-05 17:54:27.879752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.539 test_start 00:05:07.539 oneshot 00:05:07.539 tick 100 00:05:07.539 tick 100 00:05:07.539 tick 250 00:05:07.539 tick 100 00:05:07.539 tick 100 00:05:07.539 tick 100 00:05:07.539 tick 250 00:05:07.539 tick 500 00:05:07.539 tick 100 00:05:07.539 tick 100 00:05:07.539 tick 250 00:05:07.539 tick 100 00:05:07.539 tick 100 00:05:07.539 test_end 00:05:07.539 00:05:07.539 real 0m1.232s 00:05:07.539 user 0m1.137s 00:05:07.539 sys 0m0.091s 00:05:07.539 17:54:28 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.539 17:54:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.539 ************************************ 00:05:07.539 END TEST event_reactor 00:05:07.539 ************************************ 00:05:07.539 17:54:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.539 17:54:28 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:07.539 17:54:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.539 17:54:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.798 ************************************ 00:05:07.798 START TEST event_reactor_perf 00:05:07.798 ************************************ 00:05:07.798 17:54:29 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.798 [2024-10-05 17:54:29.029353] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:07.798 [2024-10-05 17:54:29.029440] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468229 ] 00:05:07.798 [2024-10-05 17:54:29.099623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.798 [2024-10-05 17:54:29.170447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.173 test_start 00:05:09.173 test_end 00:05:09.173 Performance: 982254 events per second 00:05:09.173 00:05:09.173 real 0m1.220s 00:05:09.173 user 0m1.135s 00:05:09.173 sys 0m0.081s 00:05:09.173 17:54:30 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.173 17:54:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.173 ************************************ 00:05:09.173 END TEST event_reactor_perf 00:05:09.173 ************************************ 00:05:09.173 17:54:30 event -- event/event.sh@49 -- # uname -s 00:05:09.173 17:54:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.174 17:54:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.174 17:54:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.174 17:54:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.174 17:54:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.174 ************************************ 00:05:09.174 START TEST event_scheduler 00:05:09.174 ************************************ 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.174 * Looking for test storage... 
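The event_scheduler suite launching here repeats the lifecycle every earlier section used: start the target binary in the background, block in waitforlisten until the process is alive and /var/tmp/spdk.sock accepts connections, run the checks over RPC, then tear down with killprocess, whose ps --no-headers -o comm= probe (the reactor_0 = sudo comparison in the traces) refuses to kill a sudo wrapper by mistake. A compact sketch of the two helpers as the traces exercise them; the vendored versions verify liveness via rpc.py and handle more edge cases:

# waitforlisten/killprocess pattern from the traces (sketch).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [ -S "$rpc_addr" ] && return 0           # socket exists; assume it is listening
        sleep 0.1
    done
    return 1
}
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                        # already gone
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                                    # wait works for our own children
}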
00:05:09.174 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.174 17:54:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:09.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.174 --rc genhtml_branch_coverage=1 00:05:09.174 --rc genhtml_function_coverage=1 00:05:09.174 --rc genhtml_legend=1 00:05:09.174 --rc geninfo_all_blocks=1 00:05:09.174 --rc geninfo_unexecuted_blocks=1 00:05:09.174 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.174 ' 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:09.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.174 --rc genhtml_branch_coverage=1 00:05:09.174 --rc genhtml_function_coverage=1 00:05:09.174 --rc genhtml_legend=1 00:05:09.174 --rc geninfo_all_blocks=1 00:05:09.174 --rc geninfo_unexecuted_blocks=1 00:05:09.174 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.174 ' 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:09.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.174 --rc genhtml_branch_coverage=1 00:05:09.174 --rc genhtml_function_coverage=1 00:05:09.174 --rc genhtml_legend=1 00:05:09.174 --rc geninfo_all_blocks=1 00:05:09.174 --rc geninfo_unexecuted_blocks=1 00:05:09.174 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.174 ' 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:09.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.174 --rc genhtml_branch_coverage=1 00:05:09.174 --rc genhtml_function_coverage=1 00:05:09.174 --rc genhtml_legend=1 00:05:09.174 --rc geninfo_all_blocks=1 00:05:09.174 --rc geninfo_unexecuted_blocks=1 00:05:09.174 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:09.174 ' 00:05:09.174 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.174 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1468551 00:05:09.174 17:54:30 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.174 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1468551 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1468551 ']' 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.174 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.174 17:54:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.174 [2024-10-05 17:54:30.488539] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:09.174 [2024-10-05 17:54:30.488630] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468551 ] 00:05:09.174 [2024-10-05 17:54:30.552022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.174 [2024-10-05 17:54:30.626767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.174 [2024-10-05 17:54:30.626855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.174 [2024-10-05 17:54:30.626939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.174 [2024-10-05 17:54:30.626941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:09.433 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.433 [2024-10-05 17:54:30.675536] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:09.433 [2024-10-05 17:54:30.675557] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.433 [2024-10-05 17:54:30.675568] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.433 [2024-10-05 17:54:30.675576] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.433 [2024-10-05 17:54:30.675583] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.433 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
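Worth pinning down the lifecycle idiom the trace above just exercised: the scheduler app is launched with --wait-for-rpc, a trap ties its pid to killprocess on any exit path, waitforlisten blocks until /var/tmp/spdk.sock answers, and only then do framework_set_scheduler/framework_start_init run. A stripped-down sketch of that sequence — the polling loop is an illustrative stand-in for the real waitforlisten helper, not its verbatim source:

    app=./test/event/scheduler/scheduler
    sock=/var/tmp/spdk.sock

    $app -m 0xF -p 0x2 --wait-for-rpc -f &
    pid=$!
    trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT

    # Illustrative stand-in for waitforlisten: poll until the RPC socket answers.
    for i in $(seq 1 100); do
        [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

    ./scripts/rpc.py -s "$sock" framework_set_scheduler dynamic
    ./scripts/rpc.py -s "$sock" framework_start_init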
00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.433 [2024-10-05 17:54:30.748364] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.433 17:54:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.433 17:54:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.433 ************************************ 00:05:09.433 START TEST scheduler_create_thread 00:05:09.433 ************************************ 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.433 2 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.433 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 3 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 4 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 5 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 
17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 6 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 7 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 8 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 9 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 10 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.434 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.692 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.693 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.693 17:54:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.693 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.693 17:54:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.648 17:54:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.648 17:54:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.648 17:54:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.648 17:54:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.045 17:54:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.045 17:54:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:12.045 17:54:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:12.045 17:54:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.045 17:54:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.979 17:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.979 00:05:12.979 real 0m3.381s 00:05:12.979 user 0m0.021s 00:05:12.979 sys 0m0.006s 00:05:12.979 17:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.979 17:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.979 ************************************ 00:05:12.979 END TEST scheduler_create_thread 00:05:12.979 ************************************ 00:05:12.979 17:54:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.979 17:54:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1468551 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1468551 ']' 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1468551 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468551 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468551' 00:05:12.979 killing process with pid 1468551 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1468551 00:05:12.979 17:54:34 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1468551 00:05:13.237 [2024-10-05 17:54:34.552241] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
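The scheduler_create_thread test that just passed is, under the hood, a fixed RPC sequence through the scheduler_plugin. A condensed sketch — method names and arguments are copied from the trace; $rpc is shorthand for scripts/rpc.py against /var/tmp/spdk.sock, and the assumption that scheduler_thread_create prints the new thread id matches the thread_id=11/thread_id=12 captures above:

    rpc='./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin'

    # One fully busy and one idle thread pinned per core, scheduler.sh@12-19 style.
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done

    $rpc scheduler_thread_create -n one_third_active -a 30
    tid=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$tid" 50     # promote the idle thread to 50% busy

    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"            # exercise deletion under load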
00:05:13.496 00:05:13.496 real 0m4.461s 00:05:13.496 user 0m7.780s 00:05:13.496 sys 0m0.413s 00:05:13.496 17:54:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.496 17:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.496 ************************************ 00:05:13.496 END TEST event_scheduler 00:05:13.496 ************************************ 00:05:13.496 17:54:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.496 17:54:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.496 17:54:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.496 17:54:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.496 17:54:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.496 ************************************ 00:05:13.496 START TEST app_repeat 00:05:13.496 ************************************ 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1469396 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1469396' 00:05:13.496 Process app_repeat pid: 1469396 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.496 spdk_app_start Round 0 00:05:13.496 17:54:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1469396 /var/tmp/spdk-nbd.sock 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1469396 ']' 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.496 17:54:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.496 [2024-10-05 17:54:34.893335] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
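killprocess, which reaped the scheduler app above and is now armed for app_repeat's pid through the same trap, has one guard worth calling out in the trace: it checks the target's comm and refuses to signal a process that shows up as sudo. A condensed sketch; the real autotest_common.sh helper handles the sudo case differently rather than simply bailing, so treat this branch as a simplification:

    killprocess_sketch() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                  # still alive?
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1      # simplified: never SIGTERM a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap; tolerate a non-zero exit
    }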
00:05:13.496 [2024-10-05 17:54:34.893396] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469396 ] 00:05:13.754 [2024-10-05 17:54:34.962159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.754 [2024-10-05 17:54:35.034585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.754 [2024-10-05 17:54:35.034587] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.754 17:54:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.754 17:54:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:13.754 17:54:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.012 Malloc0 00:05:14.012 17:54:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.271 Malloc1 00:05:14.271 17:54:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.271 /dev/nbd0 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.271 17:54:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.271 1+0 records in 00:05:14.271 1+0 records out 00:05:14.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157583 s, 26.0 MB/s 00:05:14.271 17:54:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.529 /dev/nbd1 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.529 1+0 records in 00:05:14.529 1+0 records out 00:05:14.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248366 s, 16.5 MB/s 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:14.529 17:54:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
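Both waitfornbd traces above follow the same two-stage readiness probe: wait for the kernel to list the device in /proc/partitions, then prove it is actually readable with a single O_DIRECT 4 KiB read. A sketch matching the traced logic — the pacing between attempts is an assumption; the trace only shows the 20-attempt bound and the dd/stat/rm check:

    waitfornbd_sketch() {
        local nbd_name=$1 i size tmp=./nbdtest
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                               # pacing assumed
        done
        (( i > 20 )) && return 1                    # device never appeared

        for (( i = 1; i <= 20; i++ )); do
            # One direct-I/O block; a 0-byte result means nbd is not really up yet.
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null
            size=$(stat -c %s "$tmp" 2> /dev/null || echo 0)
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }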
00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.529 17:54:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.787 17:54:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.787 { 00:05:14.787 "nbd_device": "/dev/nbd0", 00:05:14.787 "bdev_name": "Malloc0" 00:05:14.787 }, 00:05:14.787 { 00:05:14.787 "nbd_device": "/dev/nbd1", 00:05:14.787 "bdev_name": "Malloc1" 00:05:14.787 } 00:05:14.787 ]' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.787 { 00:05:14.787 "nbd_device": "/dev/nbd0", 00:05:14.787 "bdev_name": "Malloc0" 00:05:14.787 }, 00:05:14.787 { 00:05:14.787 "nbd_device": "/dev/nbd1", 00:05:14.787 "bdev_name": "Malloc1" 00:05:14.787 } 00:05:14.787 ]' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.787 /dev/nbd1' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.787 /dev/nbd1' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.787 256+0 records in 00:05:14.787 256+0 records out 00:05:14.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115227 s, 91.0 MB/s 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.787 17:54:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.045 256+0 records in 00:05:15.045 256+0 records out 00:05:15.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196527 s, 53.4 MB/s 00:05:15.045 17:54:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.045 17:54:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.045 256+0 records in 00:05:15.045 256+0 records out 00:05:15.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211539 s, 49.6 
MB/s 00:05:15.045 17:54:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.045 17:54:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.046 17:54:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.304 17:54:36 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.304 17:54:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.562 17:54:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.562 17:54:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.820 17:54:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.077 [2024-10-05 17:54:37.351136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.077 [2024-10-05 17:54:37.419001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.077 [2024-10-05 17:54:37.419002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.077 [2024-10-05 17:54:37.460266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.077 [2024-10-05 17:54:37.460310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.388 17:54:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.388 17:54:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.388 spdk_app_start Round 1 00:05:19.388 17:54:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1469396 /var/tmp/spdk-nbd.sock 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1469396 ']' 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
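Round 1 is about to repeat the data pass Round 0 just finished, so the shape is worth isolating: write a 1 MiB random reference file through every attached nbd with O_DIRECT, then byte-compare the first 1M of each device against that same file. A sketch of that write/verify pair as traced (paths shortened to the workspace-relative names):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp=./nbdrandtest

    dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB reference pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                      # any mismatch fails the round
    done
    rm "$tmp"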
00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.388 17:54:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:19.388 17:54:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.388 Malloc0 00:05:19.388 17:54:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.388 Malloc1 00:05:19.388 17:54:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.388 17:54:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.647 /dev/nbd0 00:05:19.647 17:54:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.647 17:54:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:19.647 17:54:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.647 1+0 records in 00:05:19.647 1+0 records out 00:05:19.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219262 s, 18.7 MB/s 00:05:19.647 17:54:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.647 17:54:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:19.647 17:54:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.647 17:54:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:19.647 17:54:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:19.647 17:54:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.647 17:54:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.647 17:54:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.905 /dev/nbd1 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.905 1+0 records in 00:05:19.905 1+0 records out 00:05:19.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183481 s, 22.3 MB/s 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:19.905 17:54:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.905 17:54:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.164 { 00:05:20.164 "nbd_device": "/dev/nbd0", 00:05:20.164 "bdev_name": "Malloc0" 00:05:20.164 }, 00:05:20.164 { 00:05:20.164 "nbd_device": "/dev/nbd1", 00:05:20.164 "bdev_name": "Malloc1" 00:05:20.164 } 00:05:20.164 ]' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.164 { 00:05:20.164 "nbd_device": "/dev/nbd0", 00:05:20.164 "bdev_name": "Malloc0" 00:05:20.164 }, 00:05:20.164 { 00:05:20.164 "nbd_device": "/dev/nbd1", 00:05:20.164 "bdev_name": "Malloc1" 00:05:20.164 } 00:05:20.164 ]' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.164 /dev/nbd1' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.164 /dev/nbd1' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.164 256+0 records in 00:05:20.164 256+0 records out 00:05:20.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115375 s, 90.9 MB/s 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.164 256+0 records in 00:05:20.164 256+0 records out 00:05:20.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019843 s, 52.8 MB/s 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.164 256+0 records in 00:05:20.164 256+0 records out 00:05:20.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213143 s, 49.2 MB/s 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.164 17:54:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.423 17:54:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.681 17:54:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.938 17:54:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.938 17:54:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.197 17:54:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.197 [2024-10-05 17:54:42.640427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.456 [2024-10-05 17:54:42.706389] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.456 [2024-10-05 17:54:42.706391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.456 [2024-10-05 17:54:42.747477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.456 [2024-10-05 17:54:42.747519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.736 17:54:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.736 17:54:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:24.736 spdk_app_start Round 2 00:05:24.736 17:54:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1469396 /var/tmp/spdk-nbd.sock 00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1469396 ']' 00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
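The count-must-be-zero assertion that just gated this teardown (and will gate Round 2's) never touches /dev: it asks the app itself. nbd_get_disks returns a JSON array, jq flattens it to device paths, and grep -c counts them; the || true matters because grep -c exits non-zero on an empty list, which is exactly the '[]' case above. A sketch, with the socket path as in the trace:

    nbd_get_count_sketch() {
        local sock=$1 json names count
        json=$(./scripts/rpc.py -s "$sock" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)   # 0 matches -> exit 1, hence || true
        echo "$count"
    }

    [ "$(nbd_get_count_sketch /var/tmp/spdk-nbd.sock)" -eq 0 ]   # safe to kill the app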
00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:24.736 17:54:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:24.736 17:54:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:24.736 Malloc0
00:05:24.736 17:54:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:24.736 Malloc1
00:05:24.736 17:54:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.736 17:54:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:24.995 /dev/nbd0
00:05:24.995 17:54:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:24.995 17:54:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.995 1+0 records in
00:05:24.995 1+0 records out
00:05:24.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219429 s, 18.7 MB/s
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:24.995 17:54:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:24.995 17:54:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.995 17:54:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.995 17:54:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:25.253 /dev/nbd1
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:25.253 1+0 records in
00:05:25.253 1+0 records out
00:05:25.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261259 s, 15.7 MB/s
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:25.253 17:54:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:25.253 {
00:05:25.253 "nbd_device": "/dev/nbd0",
00:05:25.253 "bdev_name": "Malloc0"
00:05:25.253 },
00:05:25.253 {
00:05:25.253 "nbd_device": "/dev/nbd1",
00:05:25.253 "bdev_name": "Malloc1"
00:05:25.253 }
00:05:25.253 ]'
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:25.253 {
00:05:25.253 "nbd_device": "/dev/nbd0",
00:05:25.253 "bdev_name": "Malloc0"
00:05:25.253 },
00:05:25.253 {
00:05:25.253 "nbd_device": "/dev/nbd1",
00:05:25.253 "bdev_name": "Malloc1"
00:05:25.253 }
00:05:25.253 ]'
00:05:25.253 17:54:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:25.512 /dev/nbd1'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:25.512 /dev/nbd1'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:25.512 256+0 records in
00:05:25.512 256+0 records out
00:05:25.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103279 s, 102 MB/s
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:25.512 256+0 records in
00:05:25.512 256+0 records out
00:05:25.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197698 s, 53.0 MB/s
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:25.512 256+0 records in
00:05:25.512 256+0 records out
00:05:25.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212258 s, 49.4 MB/s
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.512 17:54:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:25.770 17:54:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:25.770 17:54:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:25.770 17:54:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:25.770 17:54:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.771 17:54:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.771 17:54:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:25.771 17:54:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.771 17:54:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.771 17:54:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.771 17:54:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:26.029 17:54:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:26.288 17:54:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:26.288 17:54:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:26.288 17:54:47 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:26.547 [2024-10-05 17:54:47.884371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:26.547 [2024-10-05 17:54:47.949943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:26.547 [2024-10-05 17:54:47.949945] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.547 [2024-10-05 17:54:47.991119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:26.547 [2024-10-05 17:54:47.991164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:29.827 17:54:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1469396 /var/tmp/spdk-nbd.sock
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1469396 ']'
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
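Stripped of the xtrace noise, the nbd round trip traced above is a short recipe. The sketch below condenses it; it assumes an SPDK target already serving RPCs on /var/tmp/spdk-nbd.sock and a free /dev/nbd0 (both taken from the trace, not guaranteed elsewhere), and shortens the temp-file path to /tmp/nbdrandtest:

# Condensed nbd data-verify flow from the trace above. Assumes a running
# SPDK target on /var/tmp/spdk-nbd.sock and a free /dev/nbd0.
rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create 64 4096            # 64 MiB malloc bdev, 4 KiB blocks -> "Malloc0"
"$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as an nbd block device
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256             # 1 MiB of random data
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through nbd
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                 # byte-for-byte read-back check
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0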
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:29.827 17:54:50 event.app_repeat -- event/event.sh@39 -- # killprocess 1469396
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1469396 ']'
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1469396
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469396
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469396'
killing process with pid 1469396
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1469396
00:05:29.827 17:54:50 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1469396
00:05:29.827 spdk_app_start is called in Round 0.
00:05:29.827 Shutdown signal received, stop current app iteration
00:05:29.827 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization...
00:05:29.827 spdk_app_start is called in Round 1.
00:05:29.827 Shutdown signal received, stop current app iteration
00:05:29.827 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization...
00:05:29.827 spdk_app_start is called in Round 2.
00:05:29.827 Shutdown signal received, stop current app iteration
00:05:29.827 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization...
00:05:29.827 spdk_app_start is called in Round 3.
00:05:29.827 Shutdown signal received, stop current app iteration
00:05:29.827 17:54:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:29.827 17:54:51 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:29.827
00:05:29.827 real 0m16.247s
00:05:29.827 user 0m34.672s
00:05:29.827 sys 0m3.183s
00:05:29.827 17:54:51 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:29.827 17:54:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.827 ************************************
00:05:29.827 END TEST app_repeat
00:05:29.827 ************************************
00:05:29.827 17:54:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:29.827 17:54:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:29.828 17:54:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:29.828 17:54:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:29.828 17:54:51 event -- common/autotest_common.sh@10 -- # set +x
00:05:29.828 ************************************
00:05:29.828 START TEST cpu_locks
00:05:29.828 ************************************
00:05:29.828 17:54:51 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:29.828 * Looking for test storage...
00:05:30.087 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:30.087 17:54:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:30.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.087 --rc genhtml_branch_coverage=1
00:05:30.087 --rc genhtml_function_coverage=1
00:05:30.087 --rc genhtml_legend=1
00:05:30.087 --rc geninfo_all_blocks=1
00:05:30.087 --rc geninfo_unexecuted_blocks=1
00:05:30.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:30.087 '
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:30.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.087 --rc genhtml_branch_coverage=1
00:05:30.087 --rc genhtml_function_coverage=1
00:05:30.087 --rc genhtml_legend=1
00:05:30.087 --rc geninfo_all_blocks=1
00:05:30.087 --rc geninfo_unexecuted_blocks=1
00:05:30.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:30.087 '
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:30.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.087 --rc genhtml_branch_coverage=1
00:05:30.087 --rc genhtml_function_coverage=1
00:05:30.087 --rc genhtml_legend=1
00:05:30.087 --rc geninfo_all_blocks=1
00:05:30.087 --rc geninfo_unexecuted_blocks=1
00:05:30.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:30.087 '
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:30.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.087 --rc genhtml_branch_coverage=1
00:05:30.087 --rc genhtml_function_coverage=1
00:05:30.087 --rc genhtml_legend=1
00:05:30.087 --rc geninfo_all_blocks=1
00:05:30.087 --rc geninfo_unexecuted_blocks=1
00:05:30.087 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:30.087 '
00:05:30.087 17:54:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:30.087 17:54:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:30.087 17:54:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:30.087 17:54:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.087 17:54:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.087 ************************************
00:05:30.087 START TEST default_locks
00:05:30.087 ************************************
00:05:30.087 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:05:30.087 17:54:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:30.087 17:54:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1472448
00:05:30.087 17:54:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1472448
00:05:30.088 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1472448 ']'
00:05:30.088 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.088 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:30.088 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.088 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:30.088 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.088 [2024-10-05 17:54:51.418581] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:30.088 [2024-10-05 17:54:51.418623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472448 ]
00:05:30.088 [2024-10-05 17:54:51.483624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.346 [2024-10-05 17:54:51.563268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.346 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:30.346 17:54:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:05:30.346 17:54:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1472448
00:05:30.346 17:54:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1472448
00:05:30.346 17:54:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:30.911 lslocks: write error
00:05:30.911 17:54:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1472448
00:05:30.911 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1472448 ']'
00:05:30.911 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1472448
00:05:30.911 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:05:30.911 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:30.911 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472448
00:05:31.170 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:31.170 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:31.170 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472448'
killing process with pid 1472448
00:05:31.170 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1472448
00:05:31.170 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1472448
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1472448
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1472448
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1472448
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1472448 ']'
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.429 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1472448) - No such process
00:05:31.429 ERROR: process (pid: 1472448) is no longer running
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:31.429
00:05:31.429 real 0m1.324s
00:05:31.429 user 0m1.318s
00:05:31.429 sys 0m0.579s
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.429 17:54:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.429 ************************************
00:05:31.429 END TEST default_locks
00:05:31.429 ************************************
00:05:31.429 17:54:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:31.429 17:54:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.429 17:54:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.429 17:54:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.429 ************************************
00:05:31.429 START TEST default_locks_via_rpc
00:05:31.429 ************************************
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1472684
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1472684
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1472684 ']'
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
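The positive check in default_locks and every locks_exist call that follows go through the same one-line probe: a target that has claimed its cores holds flocks on files matching spdk_cpu_lock, which lslocks can list. A minimal form, with the pid taken from the trace above:

# The core-lock probe behind locks_exist: list the locks held by the
# target and look for the spdk_cpu_lock file name.
pid=1472448   # pid from the trace; any running spdk_tgt pid works
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds core locks"
# The "lslocks: write error" seen in the log is most likely lslocks
# hitting the pipe that grep -q already closed after its first match,
# not a test failure.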
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:31.429 17:54:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.429 [2024-10-05 17:54:52.826603] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:31.429 [2024-10-05 17:54:52.826661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472684 ]
00:05:31.688 [2024-10-05 17:54:52.894555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.688 [2024-10-05 17:54:52.972540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1472684
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1472684
00:05:31.946 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1472684
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1472684 ']'
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1472684
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472684
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472684'
killing process with pid 1472684
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1472684
00:05:32.204 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1472684
00:05:32.463
00:05:32.463 real 0m1.038s
00:05:32.463 user 0m1.002s
00:05:32.463 sys 0m0.464s
00:05:32.463 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.463 17:54:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.463 ************************************
00:05:32.463 END TEST default_locks_via_rpc
00:05:32.463 ************************************
00:05:32.463 17:54:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:32.463 17:54:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:32.463 17:54:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.463 17:54:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:32.463 ************************************
00:05:32.463 START TEST non_locking_app_on_locked_coremask
00:05:32.463 ************************************
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1472900
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1472900 /var/tmp/spdk.sock
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1472900 ']'
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:32.463 17:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:32.725 [2024-10-05 17:54:53.943840] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:32.725 [2024-10-05 17:54:53.943895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472900 ]
00:05:32.725 [2024-10-05 17:54:54.011050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.725 [2024-10-05 17:54:54.087809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1472946
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1472946 /var/tmp/spdk2.sock
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1472946 ']'
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:32.985 17:54:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:32.985 [2024-10-05 17:54:54.311677] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:32.985 [2024-10-05 17:54:54.311730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472946 ]
00:05:32.985 [2024-10-05 17:54:54.402218] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
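The "CPU core locks deactivated" notice is the point of this test: the second target is started on the same single-core mask but with --disable-cpumask-locks, so it skips the lock that pid 1472900 already holds. Condensed from the trace (paths and flags taken verbatim; the real run leaves both targets in the foreground of the test shell):

# Condensed two-instance startup from the trace: same core mask, but
# the second target opts out of core locking and uses a second socket.
spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
"$spdk_tgt" -m 0x1 &                                                  # claims the core-0 lock
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock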
00:05:32.985 [2024-10-05 17:54:54.402247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.244 [2024-10-05 17:54:54.555717] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.810 17:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:33.810 17:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:33.810 17:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1472900
00:05:33.810 17:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1472900
00:05:33.810 17:54:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:35.184 lslocks: write error
00:05:35.184 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1472900
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1472900 ']'
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1472900
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472900
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472900'
killing process with pid 1472900
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1472900
00:05:35.185 17:54:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1472900
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1472946
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1472946 ']'
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1472946
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472946
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:35.753 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472946'
killing process with pid 1472946
17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1472946
17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1472946
00:05:36.011
00:05:36.011 real 0m3.527s
00:05:36.011 user 0m3.668s
00:05:36.011 sys 0m1.300s
00:05:36.011 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:36.012 17:54:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.012 ************************************
00:05:36.012 END TEST non_locking_app_on_locked_coremask
00:05:36.012 ************************************
00:05:36.270 17:54:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:36.270 17:54:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:36.270 17:54:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:36.270 17:54:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:36.270 ************************************
00:05:36.270 START TEST locking_app_on_unlocked_coremask
00:05:36.270 ************************************
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1473588
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1473588 /var/tmp/spdk.sock
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1473588 ']'
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.270 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:36.270 [2024-10-05 17:54:57.544265] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:36.270 [2024-10-05 17:54:57.544330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473588 ]
00:05:36.270 [2024-10-05 17:54:57.611942] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
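Both targets were torn down through the killprocess helper whose xtrace appears above. A reduced form of what it does (the real helper also validates its argument and special-cases sudo-wrapped processes, which is omitted here):

# Reduced form of the killprocess helper traced above.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return                 # still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then     # never SIGTERM a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap; works because spdk_tgt is a child of the test shell
    fi
}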
00:05:36.270 [2024-10-05 17:54:57.611970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.270 [2024-10-05 17:54:57.688554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1473724
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1473724 /var/tmp/spdk2.sock
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1473724 ']'
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.528 17:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:36.528 [2024-10-05 17:54:57.917599] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:36.528 [2024-10-05 17:54:57.917663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473724 ] 00:05:36.787 [2024-10-05 17:54:58.009250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.787 [2024-10-05 17:54:58.154199] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.354 17:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.354 17:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:37.354 17:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1473724 00:05:37.354 17:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1473724 00:05:37.354 17:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.727 lslocks: write error 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1473588 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1473588 ']' 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1473588 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473588 00:05:38.727 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.728 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.728 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473588' 00:05:38.728 killing process with pid 1473588 00:05:38.728 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1473588 00:05:38.728 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1473588 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1473724 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1473724 ']' 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1473724 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473724 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.714 17:55:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473724' 00:05:39.714 killing process with pid 1473724 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1473724 00:05:39.714 17:55:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1473724 00:05:39.996 00:05:39.996 real 0m3.664s 00:05:39.996 user 0m3.835s 00:05:39.996 sys 0m1.304s 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 ************************************ 00:05:39.996 END TEST locking_app_on_unlocked_coremask 00:05:39.996 ************************************ 00:05:39.996 17:55:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.996 17:55:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.996 17:55:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.996 17:55:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 ************************************ 00:05:39.996 START TEST locking_app_on_locked_coremask 00:05:39.996 ************************************ 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1474303 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1474303 /var/tmp/spdk.sock 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1474303 ']' 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.996 17:55:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 [2024-10-05 17:55:01.291511] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:39.996 [2024-10-05 17:55:01.291589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474303 ]
00:05:39.996 [2024-10-05 17:55:01.357015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.996 [2024-10-05 17:55:01.432480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1474419
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1474419 /var/tmp/spdk2.sock
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1474419 /var/tmp/spdk2.sock
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1474419 /var/tmp/spdk2.sock
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1474419 ']'
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:40.932 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.933 [2024-10-05 17:55:02.163765] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:05:40.933 [2024-10-05 17:55:02.163830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474419 ]
00:05:40.933 [2024-10-05 17:55:02.255528] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1474303 has claimed it.
00:05:40.933 [2024-10-05 17:55:02.255566] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:41.500 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1474419) - No such process
00:05:41.500 ERROR: process (pid: 1474419) is no longer running
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1474303
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1474303
00:05:41.500 17:55:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:42.066 lslocks: write error
00:05:42.066 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1474303
00:05:42.066 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1474303 ']'
00:05:42.067 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1474303
00:05:42.067 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:42.067 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:42.067 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1474303
00:05:42.325 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:42.325 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:42.325 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1474303'
killing process with pid 1474303
00:05:42.325 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1474303
00:05:42.325 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1474303
00:05:42.584
00:05:42.584 real 0m2.600s
00:05:42.584 user 0m2.845s
00:05:42.584 sys 0m0.854s
00:05:42.584 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:42.584 17:55:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.584 ************************************ 00:05:42.584 END TEST locking_app_on_locked_coremask 00:05:42.584 ************************************ 00:05:42.584 17:55:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.584 17:55:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.584 17:55:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.584 17:55:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.584 ************************************ 00:05:42.584 START TEST locking_overlapped_coremask 00:05:42.584 ************************************ 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1474859 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1474859 /var/tmp/spdk.sock 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1474859 ']' 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.584 17:55:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.584 [2024-10-05 17:55:03.975082] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:42.584 [2024-10-05 17:55:03.975143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474859 ] 00:05:42.584 [2024-10-05 17:55:04.040846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.842 [2024-10-05 17:55:04.121111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.842 [2024-10-05 17:55:04.121213] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.842 [2024-10-05 17:55:04.121221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1474877 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1474877 /var/tmp/spdk2.sock 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1474877 /var/tmp/spdk2.sock 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1474877 /var/tmp/spdk2.sock 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1474877 ']' 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.101 17:55:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.101 [2024-10-05 17:55:04.367715] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:43.101 [2024-10-05 17:55:04.367781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474877 ] 00:05:43.101 [2024-10-05 17:55:04.459229] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1474859 has claimed it. 00:05:43.101 [2024-10-05 17:55:04.459273] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.666 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1474877) - No such process 00:05:43.666 ERROR: process (pid: 1474877) is no longer running 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1474859 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1474859 ']' 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1474859 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1474859 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1474859' 00:05:43.667 killing process with pid 1474859 00:05:43.667 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1474859 00:05:43.667 17:55:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1474859 00:05:44.233 00:05:44.233 real 0m1.467s 00:05:44.233 user 0m3.950s 00:05:44.233 sys 0m0.426s 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.233 ************************************ 00:05:44.233 END TEST locking_overlapped_coremask 00:05:44.233 ************************************ 00:05:44.233 17:55:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.233 17:55:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.233 17:55:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.233 17:55:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.233 ************************************ 00:05:44.233 START TEST locking_overlapped_coremask_via_rpc 00:05:44.233 ************************************ 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1475166 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1475166 /var/tmp/spdk.sock 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1475166 ']' 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.233 17:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.233 [2024-10-05 17:55:05.516148] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:44.233 [2024-10-05 17:55:05.516207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475166 ] 00:05:44.233 [2024-10-05 17:55:05.583879] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
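Both overlapped-coremask tests in this block pair a -m 0x7 target (cores 0, 1, 2) with a -m 0x1c target (cores 2, 3, 4), so the contested core follows directly from the mask overlap. A quick sketch of that arithmetic:
  # Cores are bits in the -m mask; the shared bit is the core both try to claim.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2 -> core 2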
00:05:44.233 [2024-10-05 17:55:05.583903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.233 [2024-10-05 17:55:05.662119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.233 [2024-10-05 17:55:05.662220] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.234 [2024-10-05 17:55:05.662224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1475188 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1475188 /var/tmp/spdk2.sock 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1475188 ']' 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.168 17:55:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.168 [2024-10-05 17:55:06.395595] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:45.168 [2024-10-05 17:55:06.395658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475188 ] 00:05:45.168 [2024-10-05 17:55:06.484855] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
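With --disable-cpumask-locks the targets above come up despite the overlapping masks; the locks only appear once framework_enable_cpumask_locks is called. A sketch of the check that check_remaining_locks performs later in cpu_locks.sh, assuming the 0x7 target has claimed its cores:
  # One flock'd file per claimed core; mask 0x7 should yield exactly these three.
  ls /var/tmp/spdk_cpu_lock_*
  # expected: /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002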
00:05:45.168 [2024-10-05 17:55:06.484877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.427 [2024-10-05 17:55:06.636627] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.427 [2024-10-05 17:55:06.640230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.427 [2024-10-05 17:55:06.640231] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.993 [2024-10-05 17:55:07.262245] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1475166 has claimed it. 
00:05:45.993 request:
00:05:45.993 {
00:05:45.993 "method": "framework_enable_cpumask_locks",
00:05:45.993 "req_id": 1
00:05:45.993 }
00:05:45.993 Got JSON-RPC error response
00:05:45.993 response:
00:05:45.993 {
00:05:45.993 "code": -32603,
00:05:45.993 "message": "Failed to claim CPU core: 2"
00:05:45.993 }
00:05:45.993 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1475166 /var/tmp/spdk.sock
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1475166 ']'
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1475188 /var/tmp/spdk2.sock
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1475188 ']'
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
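The -32603 response above is the test's expected path: the first target already claimed cores 0-2 via RPC, so enabling locks on the second target (mask 0x1c) collides on core 2. A sketch of driving the same sequence by hand with the bundled rpc.py, using the socket paths from this run:
  ./scripts/rpc.py framework_enable_cpumask_locks                           # first target claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails: core 2 already locked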
00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.251 00:05:46.251 real 0m2.178s 00:05:46.251 user 0m0.892s 00:05:46.251 sys 0m0.215s 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.251 17:55:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.251 ************************************ 00:05:46.251 END TEST locking_overlapped_coremask_via_rpc 00:05:46.251 ************************************ 00:05:46.251 17:55:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.251 17:55:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1475166 ]] 00:05:46.251 17:55:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1475166 00:05:46.251 17:55:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1475166 ']' 00:05:46.251 17:55:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1475166 00:05:46.251 17:55:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1475166 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1475166' 00:05:46.510 killing process with pid 1475166 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1475166 00:05:46.510 17:55:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1475166 00:05:46.768 17:55:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1475188 ]] 00:05:46.768 17:55:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1475188 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1475188 ']' 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1475188 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1475188 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:46.768 17:55:08 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1475188' 00:05:46.769 killing process with pid 1475188 00:05:46.769 17:55:08 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1475188 00:05:46.769 17:55:08 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1475188 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1475166 ]] 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1475166 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1475166 ']' 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1475166 00:05:47.335 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1475166) - No such process 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1475166 is not found' 00:05:47.335 Process with pid 1475166 is not found 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1475188 ]] 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1475188 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1475188 ']' 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1475188 00:05:47.335 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1475188) - No such process 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1475188 is not found' 00:05:47.335 Process with pid 1475188 is not found 00:05:47.335 17:55:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.335 00:05:47.335 real 0m17.310s 00:05:47.335 user 0m28.757s 00:05:47.335 sys 0m6.203s 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.335 17:55:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.335 ************************************ 00:05:47.335 END TEST cpu_locks 00:05:47.335 ************************************ 00:05:47.335 00:05:47.336 real 0m42.344s 00:05:47.336 user 1m17.879s 00:05:47.336 sys 0m10.487s 00:05:47.336 17:55:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.336 17:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.336 ************************************ 00:05:47.336 END TEST event 00:05:47.336 ************************************ 00:05:47.336 17:55:08 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:47.336 17:55:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.336 17:55:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.336 17:55:08 -- common/autotest_common.sh@10 -- # set +x 00:05:47.336 ************************************ 00:05:47.336 START TEST thread 00:05:47.336 ************************************ 00:05:47.336 17:55:08 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:47.336 * Looking for test storage... 00:05:47.336 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:47.336 17:55:08 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.336 17:55:08 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.336 17:55:08 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.336 17:55:08 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.336 17:55:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.336 17:55:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.336 17:55:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.336 17:55:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.336 17:55:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.336 17:55:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.336 17:55:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.336 17:55:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.336 17:55:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.336 17:55:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.336 17:55:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.336 17:55:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:47.336 17:55:08 thread -- scripts/common.sh@345 -- # : 1 00:05:47.336 17:55:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.336 17:55:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.336 17:55:08 thread -- scripts/common.sh@365 -- # decimal 1 00:05:47.336 17:55:08 thread -- scripts/common.sh@353 -- # local d=1 00:05:47.336 17:55:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.336 17:55:08 thread -- scripts/common.sh@355 -- # echo 1 00:05:47.336 17:55:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.336 17:55:08 thread -- scripts/common.sh@366 -- # decimal 2 00:05:47.595 17:55:08 thread -- scripts/common.sh@353 -- # local d=2 00:05:47.595 17:55:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.595 17:55:08 thread -- scripts/common.sh@355 -- # echo 2 00:05:47.595 17:55:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.595 17:55:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.595 17:55:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.595 17:55:08 thread -- scripts/common.sh@368 -- # return 0 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.595 --rc genhtml_branch_coverage=1 00:05:47.595 --rc genhtml_function_coverage=1 00:05:47.595 --rc genhtml_legend=1 00:05:47.595 --rc geninfo_all_blocks=1 00:05:47.595 --rc geninfo_unexecuted_blocks=1 00:05:47.595 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.595 ' 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.595 --rc genhtml_branch_coverage=1 00:05:47.595 --rc genhtml_function_coverage=1 00:05:47.595 --rc genhtml_legend=1 
00:05:47.595 --rc geninfo_all_blocks=1 00:05:47.595 --rc geninfo_unexecuted_blocks=1 00:05:47.595 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.595 ' 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.595 --rc genhtml_branch_coverage=1 00:05:47.595 --rc genhtml_function_coverage=1 00:05:47.595 --rc genhtml_legend=1 00:05:47.595 --rc geninfo_all_blocks=1 00:05:47.595 --rc geninfo_unexecuted_blocks=1 00:05:47.595 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.595 ' 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.595 --rc genhtml_branch_coverage=1 00:05:47.595 --rc genhtml_function_coverage=1 00:05:47.595 --rc genhtml_legend=1 00:05:47.595 --rc geninfo_all_blocks=1 00:05:47.595 --rc geninfo_unexecuted_blocks=1 00:05:47.595 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.595 ' 00:05:47.595 17:55:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.595 17:55:08 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.595 ************************************ 00:05:47.595 START TEST thread_poller_perf 00:05:47.595 ************************************ 00:05:47.595 17:55:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.595 [2024-10-05 17:55:08.854373] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:47.595 [2024-10-05 17:55:08.854454] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475816 ] 00:05:47.595 [2024-10-05 17:55:08.924235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.595 [2024-10-05 17:55:08.996541] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.595 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:48.970 ======================================
00:05:48.970 busy:2503480868 (cyc)
00:05:48.970 total_run_count: 834000
00:05:48.970 tsc_hz: 2500000000 (cyc)
00:05:48.970 ======================================
00:05:48.970 poller_cost: 3001 (cyc), 1200 (nsec)
00:05:48.970
00:05:48.970 real 0m1.230s
00:05:48.970 user 0m1.137s
00:05:48.970 sys 0m0.089s
17:55:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
17:55:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread_poller_perf
************************************
17:55:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
17:55:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
17:55:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
17:55:10 thread -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST thread_poller_perf
************************************
17:55:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
[2024-10-05 17:55:10.159826] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
[2024-10-05 17:55:10.159911] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476100 ]
[2024-10-05 17:55:10.229406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-05 17:55:10.300983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:49.906 ======================================
00:05:49.906 busy:2501486462 (cyc)
00:05:49.906 total_run_count: 13222000
00:05:49.906 tsc_hz: 2500000000 (cyc)
00:05:49.906 ======================================
00:05:49.906 poller_cost: 189 (cyc), 75 (nsec)
00:05:49.906
00:05:49.906 real 0m1.224s
00:05:49.906 user 0m1.135s
00:05:49.906 sys 0m0.085s
17:55:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
17:55:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread_poller_perf
************************************
00:05:50.165 17:55:11 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
17:55:11 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
17:55:11 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
17:55:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
17:55:11 thread -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST thread_spdk_lock
************************************
17:55:11 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
[2024-10-05 17:55:11.458768] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
[2024-10-05 17:55:11.458884] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476277 ]
[2024-10-05 17:55:11.528315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-10-05 17:55:11.601620] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
[2024-10-05 17:55:11.601622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-05 17:55:12.099290] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
[2024-10-05 17:55:12.099325] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3099:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
[2024-10-05 17:55:12.099335] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3054:sspin_stacks_print: *ERROR*: spinlock 0x14c6500
[2024-10-05 17:55:12.100228] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
[2024-10-05 17:55:12.100331] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
[2024-10-05 17:55:12.100351] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:05:50.732 Starting test contend
00:05:50.732   Worker    Delay  Wait us  Hold us  Total us
00:05:50.732        0        3   169281   190324    359605
00:05:50.732        1        5    87184   289533    376718
00:05:50.732 PASS test contend
00:05:50.732 Starting test hold_by_poller
00:05:50.732 PASS test hold_by_poller
00:05:50.732 Starting test hold_by_message
00:05:50.732 PASS test hold_by_message
00:05:50.732 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:05:50.732 100014 assertions passed
00:05:50.732 0 assertions failed
00:05:50.732
00:05:50.732 real 0m0.719s
00:05:50.732 user 0m1.124s
00:05:50.732 sys 0m0.090s
17:55:12 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable
17:55:12 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread_spdk_lock
************************************
00:05:50.991
00:05:50.991 real 0m3.561s
00:05:50.991 user 0m3.567s
00:05:50.991 sys 0m0.510s
17:55:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
17:55:12 thread -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST thread
************************************
17:55:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
17:55:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
17:55:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
17:55:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
17:55:12 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST app_cmdline
************************************
17:55:12 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
* Looking for test storage...
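The poller_cost figures in the two summaries above are busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. A quick check of the arithmetic:
  echo $(( 2503480868 / 834000 ))     # 3001 cycles per poll for the 1 us period run
  echo $(( 2501486462 / 13222000 ))   # 189 cycles per poll for the 0 us period run
  # at tsc_hz 2500000000 (2.5 cycles/ns): 3001 / 2.5 ~= 1200 nsec, 189 / 2.5 ~= 75 nsec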
00:05:50.991 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.991 17:55:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.991 --rc genhtml_branch_coverage=1 00:05:50.991 --rc genhtml_function_coverage=1 00:05:50.991 --rc genhtml_legend=1 00:05:50.991 --rc geninfo_all_blocks=1 00:05:50.991 --rc geninfo_unexecuted_blocks=1 00:05:50.991 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.991 ' 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.991 --rc genhtml_branch_coverage=1 00:05:50.991 --rc genhtml_function_coverage=1 00:05:50.991 --rc 
genhtml_legend=1 00:05:50.991 --rc geninfo_all_blocks=1 00:05:50.991 --rc geninfo_unexecuted_blocks=1 00:05:50.991 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.991 ' 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.991 --rc genhtml_branch_coverage=1 00:05:50.991 --rc genhtml_function_coverage=1 00:05:50.991 --rc genhtml_legend=1 00:05:50.991 --rc geninfo_all_blocks=1 00:05:50.991 --rc geninfo_unexecuted_blocks=1 00:05:50.991 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.991 ' 00:05:50.991 17:55:12 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.991 --rc genhtml_branch_coverage=1 00:05:50.991 --rc genhtml_function_coverage=1 00:05:50.991 --rc genhtml_legend=1 00:05:50.991 --rc geninfo_all_blocks=1 00:05:50.991 --rc geninfo_unexecuted_blocks=1 00:05:50.991 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.991 ' 00:05:50.991 17:55:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.991 17:55:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1476469 00:05:51.249 17:55:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1476469 00:05:51.249 17:55:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:51.249 17:55:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1476469 ']' 00:05:51.249 17:55:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.249 17:55:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.249 17:55:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.249 17:55:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.249 17:55:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 [2024-10-05 17:55:12.477971] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:51.249 [2024-10-05 17:55:12.478047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476469 ] 00:05:51.249 [2024-10-05 17:55:12.547599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.249 [2024-10-05 17:55:12.624992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.507 17:55:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.507 17:55:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:51.507 17:55:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:51.765 { 00:05:51.765 "version": "SPDK v25.01-pre git sha1 3950cd1bb", 00:05:51.765 "fields": { 00:05:51.765 "major": 25, 00:05:51.765 "minor": 1, 00:05:51.765 "patch": 0, 00:05:51.765 "suffix": "-pre", 00:05:51.765 "commit": "3950cd1bb" 00:05:51.765 } 00:05:51.765 } 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:51.765 17:55:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:51.765 17:55:13 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:05:51.765 17:55:13 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.765 request: 00:05:51.765 { 00:05:51.765 "method": "env_dpdk_get_mem_stats", 00:05:51.765 "req_id": 1 00:05:51.765 } 00:05:51.765 Got JSON-RPC error response 00:05:51.765 response: 00:05:51.765 { 00:05:51.765 "code": -32601, 00:05:51.765 "message": "Method not found" 00:05:51.765 } 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.024 17:55:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1476469 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1476469 ']' 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1476469 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476469 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476469' 00:05:52.024 killing process with pid 1476469 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 1476469 00:05:52.024 17:55:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 1476469 00:05:52.282 00:05:52.282 real 0m1.323s 00:05:52.282 user 0m1.478s 00:05:52.282 sys 0m0.489s 00:05:52.282 17:55:13 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.282 17:55:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.282 ************************************ 00:05:52.282 END TEST app_cmdline 00:05:52.282 ************************************ 00:05:52.282 17:55:13 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:52.282 17:55:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.282 17:55:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.282 17:55:13 -- common/autotest_common.sh@10 -- # set +x 00:05:52.282 ************************************ 00:05:52.282 START TEST version 00:05:52.282 ************************************ 00:05:52.282 17:55:13 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:52.540 * Looking for test storage... 
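The "Method not found" above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable. A sketch of the three calls, assuming the default /var/tmp/spdk.sock socket:
  ./scripts/rpc.py rpc_get_methods          # allowed
  ./scripts/rpc.py spdk_get_version         # allowed
  ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected with -32601 Method not found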
00:05:52.540 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.540 17:55:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.540 17:55:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.540 17:55:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.540 17:55:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.540 17:55:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.540 17:55:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.540 17:55:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.540 17:55:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.540 17:55:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.540 17:55:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.540 17:55:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.540 17:55:13 version -- scripts/common.sh@344 -- # case "$op" in 00:05:52.540 17:55:13 version -- scripts/common.sh@345 -- # : 1 00:05:52.540 17:55:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.540 17:55:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.540 17:55:13 version -- scripts/common.sh@365 -- # decimal 1 00:05:52.540 17:55:13 version -- scripts/common.sh@353 -- # local d=1 00:05:52.540 17:55:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.540 17:55:13 version -- scripts/common.sh@355 -- # echo 1 00:05:52.540 17:55:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.540 17:55:13 version -- scripts/common.sh@366 -- # decimal 2 00:05:52.540 17:55:13 version -- scripts/common.sh@353 -- # local d=2 00:05:52.540 17:55:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.540 17:55:13 version -- scripts/common.sh@355 -- # echo 2 00:05:52.540 17:55:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.540 17:55:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.540 17:55:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.540 17:55:13 version -- scripts/common.sh@368 -- # return 0 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.540 --rc genhtml_branch_coverage=1 00:05:52.540 --rc genhtml_function_coverage=1 00:05:52.540 --rc genhtml_legend=1 00:05:52.540 --rc geninfo_all_blocks=1 00:05:52.540 --rc geninfo_unexecuted_blocks=1 00:05:52.540 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.540 ' 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.540 --rc genhtml_branch_coverage=1 00:05:52.540 --rc genhtml_function_coverage=1 00:05:52.540 --rc genhtml_legend=1 00:05:52.540 --rc geninfo_all_blocks=1 00:05:52.540 --rc geninfo_unexecuted_blocks=1 00:05:52.540 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.540 ' 00:05:52.540 17:55:13 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.540 --rc genhtml_branch_coverage=1 00:05:52.540 --rc genhtml_function_coverage=1 00:05:52.540 --rc genhtml_legend=1 00:05:52.541 --rc geninfo_all_blocks=1 00:05:52.541 --rc geninfo_unexecuted_blocks=1 00:05:52.541 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.541 ' 00:05:52.541 17:55:13 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.541 --rc genhtml_branch_coverage=1 00:05:52.541 --rc genhtml_function_coverage=1 00:05:52.541 --rc genhtml_legend=1 00:05:52.541 --rc geninfo_all_blocks=1 00:05:52.541 --rc geninfo_unexecuted_blocks=1 00:05:52.541 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.541 ' 00:05:52.541 17:55:13 version -- app/version.sh@17 -- # get_header_version major 00:05:52.541 17:55:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # cut -f2 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.541 17:55:13 version -- app/version.sh@17 -- # major=25 00:05:52.541 17:55:13 version -- app/version.sh@18 -- # get_header_version minor 00:05:52.541 17:55:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # cut -f2 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.541 17:55:13 version -- app/version.sh@18 -- # minor=1 00:05:52.541 17:55:13 version -- app/version.sh@19 -- # get_header_version patch 00:05:52.541 17:55:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # cut -f2 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.541 17:55:13 version -- app/version.sh@19 -- # patch=0 00:05:52.541 17:55:13 version -- app/version.sh@20 -- # get_header_version suffix 00:05:52.541 17:55:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # cut -f2 00:05:52.541 17:55:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.541 17:55:13 version -- app/version.sh@20 -- # suffix=-pre 00:05:52.541 17:55:13 version -- app/version.sh@22 -- # version=25.1 00:05:52.541 17:55:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:52.541 17:55:13 version -- app/version.sh@28 -- # version=25.1rc0 00:05:52.541 17:55:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:52.541 17:55:13 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:05:52.541 17:55:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:52.541 17:55:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:52.541 00:05:52.541 real 0m0.271s 00:05:52.541 user 0m0.158s 00:05:52.541 sys 0m0.166s 00:05:52.541 17:55:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.541 17:55:13 version -- common/autotest_common.sh@10 -- # set +x 00:05:52.541 ************************************ 00:05:52.541 END TEST version 00:05:52.541 ************************************ 00:05:52.541 17:55:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:52.541 17:55:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:52.541 17:55:13 -- spdk/autotest.sh@194 -- # uname -s 00:05:52.799 17:55:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:52.799 17:55:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:52.799 17:55:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:52.799 17:55:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:52.799 17:55:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.799 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:05:52.799 17:55:14 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:05:52.799 17:55:14 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:05:52.799 17:55:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:05:52.799 17:55:14 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:05:52.799 17:55:14 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:52.799 17:55:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.799 17:55:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.799 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:05:52.799 ************************************ 00:05:52.799 START TEST llvm_fuzz 00:05:52.799 ************************************ 00:05:52.799 17:55:14 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:52.799 * Looking for test storage... 
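The version test traced above reads each field out of include/spdk/version.h with a grep | cut | tr pipeline, assembles major.minor (appending .patch only when patch is nonzero), maps the -pre suffix to an rc0 tag, and finally checks that the installed python package reports the same string. A condensed sketch of that flow under the same tree layout; get_field is a hypothetical helper standing in for app/version.sh's get_header_version:

get_field() {
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
        cut -f2 | tr -d '"'
}
major=$(get_field MAJOR); minor=$(get_field MINOR)
patch=$(get_field PATCH); suffix=$(get_field SUFFIX)
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0   # 25.1 + -pre => 25.1rc0, as traced
py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "header and python agree: $version"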
00:05:52.799 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:05:52.799 17:55:14 llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.799 17:55:14 llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.799 17:55:14 llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.799 17:55:14 llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.799 17:55:14 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.058 17:55:14 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.058 ' 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc 
geninfo_unexecuted_blocks=1 00:05:53.058 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.058 ' 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.058 ' 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.058 ' 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:53.058 17:55:14 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.058 17:55:14 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:05:53.058 ************************************ 00:05:53.058 START TEST nvmf_llvm_fuzz 00:05:53.058 ************************************ 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:53.059 * Looking for test storage... 
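Before any fuzzing starts, llvm.sh (traced above) discovers its targets simply by globbing test/fuzz/llvm/, stripping the directory prefix, and skipping the two helper scripts; everything left over (here nvmf and vfio) gets its run.sh executed under run_test. A stripped-down sketch of that loop, following the expansions visible in the trace (the real script wraps each run.sh in run_test rather than invoking it directly):

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzzers=("$rootdir/test/fuzz/llvm/"*)   # common.sh llvm-gcov.sh nvmf vfio
fuzzers=("${fuzzers[@]##*/}")           # keep basenames only
for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
        common.sh | llvm-gcov.sh) ;;    # shared helpers, not fuzz targets
        *) "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
    esac
done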
00:05:53.059 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.059 --rc genhtml_branch_coverage=1 00:05:53.059 --rc genhtml_function_coverage=1 00:05:53.059 --rc genhtml_legend=1 00:05:53.059 --rc geninfo_all_blocks=1 00:05:53.059 --rc geninfo_unexecuted_blocks=1 00:05:53.059 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.059 ' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.059 --rc genhtml_branch_coverage=1 00:05:53.059 --rc genhtml_function_coverage=1 00:05:53.059 --rc genhtml_legend=1 00:05:53.059 --rc geninfo_all_blocks=1 00:05:53.059 --rc geninfo_unexecuted_blocks=1 00:05:53.059 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.059 ' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.059 --rc genhtml_branch_coverage=1 00:05:53.059 --rc genhtml_function_coverage=1 00:05:53.059 --rc genhtml_legend=1 00:05:53.059 --rc geninfo_all_blocks=1 00:05:53.059 --rc geninfo_unexecuted_blocks=1 00:05:53.059 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.059 ' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.059 --rc genhtml_branch_coverage=1 00:05:53.059 --rc genhtml_function_coverage=1 00:05:53.059 --rc genhtml_legend=1 00:05:53.059 --rc geninfo_all_blocks=1 00:05:53.059 --rc geninfo_unexecuted_blocks=1 00:05:53.059 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.059 ' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:53.059 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:05:53.319 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:53.320 #define SPDK_CONFIG_H 00:05:53.320 #define SPDK_CONFIG_AIO_FSDEV 1 00:05:53.320 #define SPDK_CONFIG_APPS 1 00:05:53.320 #define SPDK_CONFIG_ARCH native 00:05:53.320 #undef SPDK_CONFIG_ASAN 00:05:53.320 #undef SPDK_CONFIG_AVAHI 00:05:53.320 #undef SPDK_CONFIG_CET 00:05:53.320 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:05:53.320 #define SPDK_CONFIG_COVERAGE 1 00:05:53.320 #define SPDK_CONFIG_CROSS_PREFIX 00:05:53.320 #undef SPDK_CONFIG_CRYPTO 00:05:53.320 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:53.320 #undef SPDK_CONFIG_CUSTOMOCF 00:05:53.320 #undef SPDK_CONFIG_DAOS 00:05:53.320 #define SPDK_CONFIG_DAOS_DIR 00:05:53.320 #define SPDK_CONFIG_DEBUG 1 00:05:53.320 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:53.320 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:53.320 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:53.320 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:53.320 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:53.320 #undef SPDK_CONFIG_DPDK_UADK 00:05:53.320 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:05:53.320 #define SPDK_CONFIG_EXAMPLES 1 00:05:53.320 #undef SPDK_CONFIG_FC 00:05:53.320 #define SPDK_CONFIG_FC_PATH 00:05:53.320 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:53.320 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:53.320 #define SPDK_CONFIG_FSDEV 1 00:05:53.320 #undef SPDK_CONFIG_FUSE 00:05:53.320 #define SPDK_CONFIG_FUZZER 1 00:05:53.320 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:05:53.320 #undef SPDK_CONFIG_GOLANG 00:05:53.320 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:53.320 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:53.320 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:53.320 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:53.320 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:53.320 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:53.320 #undef SPDK_CONFIG_HAVE_LZ4 00:05:53.320 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:05:53.320 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:05:53.320 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:53.320 #define SPDK_CONFIG_IDXD 1 00:05:53.320 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:53.320 #undef SPDK_CONFIG_IPSEC_MB 00:05:53.320 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:53.320 #define SPDK_CONFIG_ISAL 1 00:05:53.320 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:53.320 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:53.320 #define SPDK_CONFIG_LIBDIR 00:05:53.320 #undef SPDK_CONFIG_LTO 00:05:53.320 #define SPDK_CONFIG_MAX_LCORES 128 00:05:53.320 #define SPDK_CONFIG_NVME_CUSE 1 00:05:53.320 #undef SPDK_CONFIG_OCF 00:05:53.320 #define SPDK_CONFIG_OCF_PATH 00:05:53.320 #define SPDK_CONFIG_OPENSSL_PATH 00:05:53.320 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:53.320 #define SPDK_CONFIG_PGO_DIR 00:05:53.320 #undef SPDK_CONFIG_PGO_USE 00:05:53.320 #define SPDK_CONFIG_PREFIX /usr/local 00:05:53.320 #undef SPDK_CONFIG_RAID5F 00:05:53.320 #undef SPDK_CONFIG_RBD 00:05:53.320 #define SPDK_CONFIG_RDMA 1 00:05:53.320 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:53.320 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:53.320 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:53.320 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:53.320 #undef SPDK_CONFIG_SHARED 00:05:53.320 #undef SPDK_CONFIG_SMA 00:05:53.320 #define SPDK_CONFIG_TESTS 1 00:05:53.320 #undef SPDK_CONFIG_TSAN 00:05:53.320 #define SPDK_CONFIG_UBLK 1 00:05:53.320 #define SPDK_CONFIG_UBSAN 1 00:05:53.320 #undef SPDK_CONFIG_UNIT_TESTS 00:05:53.320 #undef SPDK_CONFIG_URING 00:05:53.320 #define SPDK_CONFIG_URING_PATH 00:05:53.320 #undef SPDK_CONFIG_URING_ZNS 00:05:53.320 #undef SPDK_CONFIG_USDT 00:05:53.320 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:53.320 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:53.320 #define SPDK_CONFIG_VFIO_USER 1 00:05:53.320 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:53.320 #define SPDK_CONFIG_VHOST 1 00:05:53.320 #define SPDK_CONFIG_VIRTIO 1 00:05:53.320 #undef SPDK_CONFIG_VTUNE 00:05:53.320 #define SPDK_CONFIG_VTUNE_DIR 00:05:53.320 #define SPDK_CONFIG_WERROR 1 00:05:53.320 #define SPDK_CONFIG_WPDK_DIR 00:05:53.320 #undef SPDK_CONFIG_XNVME 00:05:53.320 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.320 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:05:53.321 17:55:14 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
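The long run of paired ": N" / "export SPDK_TEST_..." lines around this point is bash xtrace of the default-assignment idiom autotest_common.sh uses for its feature flags: the parameter expansion keeps any value inherited from autorun-spdk.conf (hence ": 1" for SPDK_TEST_FUZZER in this run) and falls back to 0 otherwise. A two-line reproduction of the pattern; the variable name is taken from the log, and the := form is inferred from the trace rather than quoted from the script:

: "${SPDK_TEST_FUZZER:=0}"   # xtrace prints this as ": 1" when the conf already set it
export SPDK_TEST_FUZZER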
00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:05:53.321 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 1476976 ]] 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 1476976 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:05:53.322 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.HyF6pK 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.HyF6pK/tests/nvmf /tmp/spdk.HyF6pK 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=678330368 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4606099456 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=52986105856 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730590720 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8744484864 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 
17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30860529664 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865293312 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340125696 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346118144 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5992448 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864310272 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865297408 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=987136 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:05:53.323 * Looking for test storage... 
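[Editor's note] The loop traced just above is autotest_common.sh's set_test_storage helper: it feeds "df -T" (header stripped) into a "read -r source fs size use avail _ mount" loop, fills per-mount associative arrays, then resolves which mount backs the candidate directory and checks free space against the requested 2 GiB. A minimal sketch of that probe, under stated assumptions: the single-candidate flow is simplified, and the -B1 flag is an assumption here to make df report bytes like the values in the trace.

    #!/usr/bin/env bash
    # Hedged reconstruction of the storage probe traced above; the real
    # helper walks several candidate dirs and can fall back to tmp storage.
    requested_size=2147483648          # ~2 GiB, as the test requested
    target_dir=$1                      # e.g. .../spdk/test/fuzz/llvm/nvmf

    declare -A fss avails
    while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs
      avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)

    # Resolve which mount point backs the candidate directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')

    if (( ${avails[$mount]:-0} >= requested_size )); then
      printf '* Found test storage at %s (%s, %s bytes free)\n' \
             "$target_dir" "${fss[$mount]}" "${avails[$mount]}"
    else
      echo "not enough space under $mount" >&2
      exit 1
    fi

In the run above the "/" overlay mount wins with roughly 52 GB available, which is why the trace reports the test storage at the nvmf fuzz directory itself.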
00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=52986105856 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10959077376 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:53.323 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.323 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.324 --rc genhtml_branch_coverage=1 00:05:53.324 --rc genhtml_function_coverage=1 00:05:53.324 --rc genhtml_legend=1 00:05:53.324 --rc geninfo_all_blocks=1 00:05:53.324 --rc geninfo_unexecuted_blocks=1 00:05:53.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.324 ' 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.324 --rc genhtml_branch_coverage=1 00:05:53.324 --rc genhtml_function_coverage=1 00:05:53.324 --rc genhtml_legend=1 00:05:53.324 --rc geninfo_all_blocks=1 00:05:53.324 --rc geninfo_unexecuted_blocks=1 00:05:53.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.324 ' 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.324 --rc genhtml_branch_coverage=1 00:05:53.324 --rc genhtml_function_coverage=1 00:05:53.324 --rc genhtml_legend=1 00:05:53.324 --rc geninfo_all_blocks=1 00:05:53.324 --rc geninfo_unexecuted_blocks=1 00:05:53.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.324 ' 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.324 --rc genhtml_branch_coverage=1 00:05:53.324 --rc genhtml_function_coverage=1 00:05:53.324 --rc genhtml_legend=1 00:05:53.324 --rc geninfo_all_blocks=1 00:05:53.324 --rc geninfo_unexecuted_blocks=1 00:05:53.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.324 ' 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:05:53.324 17:55:14 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:05:53.324 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:53.582 17:55:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:05:53.582 [2024-10-05 17:55:14.824259] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:53.582 [2024-10-05 17:55:14.824329] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477216 ] 00:05:53.839 [2024-10-05 17:55:15.083119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.839 [2024-10-05 17:55:15.176091] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.839 [2024-10-05 17:55:15.234784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.839 [2024-10-05 17:55:15.251132] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:05:53.839 INFO: Running with entropic power schedule (0xFF, 100). 00:05:53.839 INFO: Seed: 2369731146 00:05:53.839 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:05:53.839 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:05:53.839 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:05:53.839 INFO: A corpus is not provided, starting from an empty corpus 00:05:53.839 #2 INITED exec/s: 0 rss: 65Mb 00:05:53.839 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:53.839 This may also happen if the target rejected all inputs we tried so far 00:05:53.839 [2024-10-05 17:55:15.299772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:53.839 [2024-10-05 17:55:15.299800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.354 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:05:54.354 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:54.354 #42 NEW cov: 12169 ft: 12139 corp: 2/81b lim: 320 exec/s: 0 rss: 73Mb L: 80/80 MS: 5 ChangeBit-ChangeByte-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:05:54.354 [2024-10-05 17:55:15.610586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.354 [2024-10-05 17:55:15.610620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.354 #47 NEW cov: 12282 ft: 12779 corp: 3/207b lim: 320 exec/s: 0 rss: 73Mb L: 126/126 MS: 5 CopyPart-InsertByte-InsertByte-InsertByte-InsertRepeatedBytes- 00:05:54.354 [2024-10-05 17:55:15.650930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.354 [2024-10-05 17:55:15.650955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.354 [2024-10-05 17:55:15.651036] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:5 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.354 [2024-10-05 17:55:15.651050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.354 [2024-10-05 17:55:15.651109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:6 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.354 [2024-10-05 17:55:15.651123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.354 NEW_FUNC[1/1]: 0x192a068 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:05:54.355 #48 NEW cov: 12320 ft: 13592 corp: 4/421b lim: 320 exec/s: 0 rss: 73Mb L: 214/214 MS: 1 CopyPart- 00:05:54.355 [2024-10-05 17:55:15.710800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.355 [2024-10-05 17:55:15.710827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.355 #49 NEW cov: 12405 ft: 13939 corp: 5/547b lim: 320 exec/s: 0 rss: 73Mb L: 126/214 MS: 1 ChangeASCIIInt- 00:05:54.355 [2024-10-05 17:55:15.750870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:54.355 [2024-10-05 17:55:15.750896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.355 #50 NEW cov: 12405 ft: 14119 corp: 6/627b lim: 320 exec/s: 0 rss: 73Mb L: 80/214 MS: 1 ChangeByte- 00:05:54.355 [2024-10-05 17:55:15.811336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.355 [2024-10-05 17:55:15.811364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.355 [2024-10-05 17:55:15.811423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.355 [2024-10-05 17:55:15.811437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.355 [2024-10-05 17:55:15.811494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.355 [2024-10-05 17:55:15.811507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.613 #51 NEW cov: 12407 ft: 14173 corp: 7/881b lim: 320 exec/s: 0 rss: 73Mb L: 254/254 MS: 1 InsertRepeatedBytes- 00:05:54.613 [2024-10-05 17:55:15.851192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.613 [2024-10-05 17:55:15.851236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:05:54.613 #52 NEW cov: 12407 ft: 14203 corp: 8/1007b lim: 320 exec/s: 0 rss: 73Mb L: 126/254 MS: 1 CMP- DE: "\001\000\000\000\000\000\003\363"- 00:05:54.613 [2024-10-05 17:55:15.891367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.613 [2024-10-05 17:55:15.891393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.613 [2024-10-05 17:55:15.891448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.613 [2024-10-05 17:55:15.891462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.613 [2024-10-05 17:55:15.891516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.613 [2024-10-05 17:55:15.891530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.613 #53 NEW cov: 12407 ft: 14342 corp: 9/1261b lim: 320 exec/s: 0 rss: 73Mb L: 254/254 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\003\363"- 00:05:54.613 [2024-10-05 17:55:15.951485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d72 cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.613 [2024-10-05 17:55:15.951511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.613 #54 NEW cov: 12407 ft: 14381 corp: 10/1387b lim: 320 exec/s: 0 rss: 73Mb L: 126/254 MS: 1 ChangeBinInt- 00:05:54.613 [2024-10-05 17:55:16.011641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d72 cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.613 [2024-10-05 17:55:16.011667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.613 #55 NEW cov: 12407 ft: 14448 corp: 11/1513b lim: 320 exec/s: 0 rss: 73Mb L: 126/254 MS: 1 ChangeByte- 00:05:54.613 [2024-10-05 17:55:16.071862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.613 [2024-10-05 17:55:16.071889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.872 #56 NEW cov: 12407 ft: 14500 corp: 12/1604b lim: 320 exec/s: 0 rss: 73Mb L: 91/254 MS: 1 EraseBytes- 00:05:54.872 [2024-10-05 17:55:16.112151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.872 [2024-10-05 17:55:16.112176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.872 [2024-10-05 17:55:16.112250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.872 [2024-10-05 17:55:16.112265] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.872 [2024-10-05 17:55:16.112322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.872 [2024-10-05 17:55:16.112335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.872 #57 NEW cov: 12407 ft: 14520 corp: 13/1858b lim: 320 exec/s: 0 rss: 73Mb L: 254/254 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\003\363"- 00:05:54.872 [2024-10-05 17:55:16.152290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.872 [2024-10-05 17:55:16.152316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.872 [2024-10-05 17:55:16.152372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.872 [2024-10-05 17:55:16.152385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.872 [2024-10-05 17:55:16.152451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:6 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.872 [2024-10-05 17:55:16.152481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:54.872 #63 NEW cov: 12407 ft: 14892 corp: 14/2112b lim: 320 exec/s: 0 rss: 73Mb L: 254/254 MS: 1 CrossOver- 00:05:54.872 [2024-10-05 17:55:16.192128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:54.872 [2024-10-05 17:55:16.192154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.872 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:05:54.872 #64 NEW cov: 12430 ft: 14999 corp: 15/2204b lim: 320 exec/s: 0 rss: 73Mb L: 92/254 MS: 1 InsertByte- 00:05:54.872 [2024-10-05 17:55:16.252277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:54.872 [2024-10-05 17:55:16.252302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.872 #65 NEW cov: 12430 ft: 15017 corp: 16/2285b lim: 320 exec/s: 65 rss: 73Mb L: 81/254 MS: 1 InsertByte- 00:05:54.872 [2024-10-05 17:55:16.312743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x676d6d6d6d6d6d6d 00:05:54.872 [2024-10-05 17:55:16.312768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:54.872 [2024-10-05 17:55:16.312825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:54.872 [2024-10-05 17:55:16.312838] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:54.872 [2024-10-05 17:55:16.312903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:6 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:54.872 [2024-10-05 17:55:16.312917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.131 #66 NEW cov: 12430 ft: 15062 corp: 17/2539b lim: 320 exec/s: 66 rss: 73Mb L: 254/254 MS: 1 ChangeByte- 00:05:55.131 [2024-10-05 17:55:16.372618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.131 [2024-10-05 17:55:16.372643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.131 #67 NEW cov: 12430 ft: 15085 corp: 18/2666b lim: 320 exec/s: 67 rss: 73Mb L: 127/254 MS: 1 InsertByte- 00:05:55.131 [2024-10-05 17:55:16.412727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.131 [2024-10-05 17:55:16.412754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.131 #68 NEW cov: 12430 ft: 15095 corp: 19/2757b lim: 320 exec/s: 68 rss: 73Mb L: 91/254 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\003\363"- 00:05:55.131 [2024-10-05 17:55:16.452850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.131 [2024-10-05 17:55:16.452876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.131 #69 NEW cov: 12430 ft: 15102 corp: 20/2856b lim: 320 exec/s: 69 rss: 73Mb L: 99/254 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\003\363"- 00:05:55.131 [2024-10-05 17:55:16.493232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.131 [2024-10-05 17:55:16.493257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.131 [2024-10-05 17:55:16.493317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:55.131 [2024-10-05 17:55:16.493330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.131 [2024-10-05 17:55:16.493387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:55.131 [2024-10-05 17:55:16.493401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.131 #70 NEW cov: 12430 ft: 15194 corp: 21/3110b lim: 320 exec/s: 70 rss: 73Mb L: 254/254 MS: 1 ChangeBinInt- 00:05:55.131 [2024-10-05 17:55:16.553539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d 
cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.131 [2024-10-05 17:55:16.553566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.131 [2024-10-05 17:55:16.553639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:55.131 [2024-10-05 17:55:16.553653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.131 [2024-10-05 17:55:16.553710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:55.131 [2024-10-05 17:55:16.553723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.131 [2024-10-05 17:55:16.553781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d 00:05:55.131 [2024-10-05 17:55:16.553795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:55.389 #71 NEW cov: 12430 ft: 15372 corp: 22/3372b lim: 320 exec/s: 71 rss: 73Mb L: 262/262 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\003\363"- 00:05:55.389 [2024-10-05 17:55:16.613661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.389 [2024-10-05 17:55:16.613688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.389 [2024-10-05 17:55:16.613753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:5 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.389 [2024-10-05 17:55:16.613767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.389 [2024-10-05 17:55:16.613832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:6 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.389 [2024-10-05 17:55:16.613846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.389 #72 NEW cov: 12430 ft: 15416 corp: 23/3586b lim: 320 exec/s: 72 rss: 74Mb L: 214/262 MS: 1 CrossOver- 00:05:55.389 [2024-10-05 17:55:16.673450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.389 [2024-10-05 17:55:16.673476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.389 #73 NEW cov: 12430 ft: 15439 corp: 24/3678b lim: 320 exec/s: 73 rss: 74Mb L: 92/262 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\003\363"- 00:05:55.389 [2024-10-05 17:55:16.733932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:55.390 [2024-10-05 17:55:16.733957] nvme_qpair.c: 477:spdk_nvme_print_completion: 
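[Editor's note] The "#NN NEW cov: ..." lines interleaved with the SPDK qpair notices are standard libFuzzer status output: cov is the number of coverage points hit so far, ft the number of features, corp the corpus size in units and bytes, lim the current input-length cap, L: a/b (roughly) the new unit's length versus the largest unit seen, and MS the mutation sequence that produced the input (PersAutoDict replays a persistent-dictionary entry, shown after "DE:"). A small runnable helper for pulling the coverage progression out of a captured stream; "fuzz_run0.log" is a placeholder, since this CI output would first need to be saved to a file:

    # Summarize coverage growth from a saved fuzzer log.
    grep -o 'NEW cov: [0-9]* ft: [0-9]*' fuzz_run0.log |
      awk '{printf "unit %d: cov=%s ft=%s\n", NR, $3, $5}'

Applied to this run it would show cov climbing from 12169 at the first unit toward 12447 near the end, a quick way to see whether a short fuzz pass is still finding new edges.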
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.390 [2024-10-05 17:55:16.734023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d6) qid:0 cid:5 nsid:d6d6d6d6 cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.390 [2024-10-05 17:55:16.734037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.390 [2024-10-05 17:55:16.734101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:6 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.390 [2024-10-05 17:55:16.734115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.390 #74 NEW cov: 12430 ft: 15455 corp: 25/3884b lim: 320 exec/s: 74 rss: 74Mb L: 206/262 MS: 1 CrossOver- 00:05:55.390 [2024-10-05 17:55:16.773727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x300006d6d6d6d6d 00:05:55.390 [2024-10-05 17:55:16.773752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.390 #77 NEW cov: 12430 ft: 15467 corp: 26/4002b lim: 320 exec/s: 77 rss: 74Mb L: 118/262 MS: 3 EraseBytes-ChangeASCIIInt-InsertRepeatedBytes- 00:05:55.390 [2024-10-05 17:55:16.834150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.390 [2024-10-05 17:55:16.834176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.390 [2024-10-05 17:55:16.834264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:05:55.390 [2024-10-05 17:55:16.834279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.390 [2024-10-05 17:55:16.834334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:05:55.390 [2024-10-05 17:55:16.834347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.648 #78 NEW cov: 12430 ft: 15478 corp: 27/4256b lim: 320 exec/s: 78 rss: 74Mb L: 254/262 MS: 1 ShuffleBytes- 00:05:55.648 [2024-10-05 17:55:16.874213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.648 [2024-10-05 17:55:16.874239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.648 [2024-10-05 17:55:16.874321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:5 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.648 [2024-10-05 17:55:16.874336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.648 #79 NEW cov: 12447 ft: 15662 corp: 28/4384b lim: 320 exec/s: 79 rss: 74Mb L: 128/262 MS: 1 CrossOver- 
00:05:55.648 [2024-10-05 17:55:16.914114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.648 [2024-10-05 17:55:16.914139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.648 #80 NEW cov: 12447 ft: 15684 corp: 29/4511b lim: 320 exec/s: 80 rss: 74Mb L: 127/262 MS: 1 InsertByte- 00:05:55.648 [2024-10-05 17:55:16.954208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:55.648 [2024-10-05 17:55:16.954233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.648 #81 NEW cov: 12447 ft: 15759 corp: 30/4612b lim: 320 exec/s: 81 rss: 74Mb L: 101/262 MS: 1 CopyPart- 00:05:55.648 [2024-10-05 17:55:16.994513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.648 [2024-10-05 17:55:16.994539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.648 [2024-10-05 17:55:16.994620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:05:55.648 [2024-10-05 17:55:16.994634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.648 #82 NEW cov: 12447 ft: 15846 corp: 31/4803b lim: 320 exec/s: 82 rss: 74Mb L: 191/262 MS: 1 InsertRepeatedBytes- 00:05:55.648 [2024-10-05 17:55:17.054866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.648 [2024-10-05 17:55:17.054891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.648 [2024-10-05 17:55:17.054972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:5 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.648 [2024-10-05 17:55:17.054986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:55.648 [2024-10-05 17:55:17.055055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (6d) qid:0 cid:6 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:05:55.648 [2024-10-05 17:55:17.055069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:55.648 #83 NEW cov: 12447 ft: 15884 corp: 32/5017b lim: 320 exec/s: 83 rss: 74Mb L: 214/262 MS: 1 ShuffleBytes- 00:05:55.908 [2024-10-05 17:55:17.114729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6b5d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:55.908 [2024-10-05 17:55:17.114754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.908 #84 NEW cov: 12447 ft: 15894 corp: 33/5114b lim: 320 exec/s: 84 rss: 74Mb L: 97/262 MS: 1 CopyPart- 00:05:55.908 [2024-10-05 17:55:17.154830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:000001d6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:55.908 [2024-10-05 17:55:17.154855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.908 #85 NEW cov: 12447 ft: 15906 corp: 34/5194b lim: 320 exec/s: 85 rss: 74Mb L: 80/262 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:05:55.908 [2024-10-05 17:55:17.194943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d6d6d6d6d6 00:05:55.908 [2024-10-05 17:55:17.194968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.908 #86 NEW cov: 12447 ft: 15910 corp: 35/5274b lim: 320 exec/s: 86 rss: 74Mb L: 80/262 MS: 1 ShuffleBytes- 00:05:55.908 [2024-10-05 17:55:17.235019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:6d6d6d6d cdw10:6d6d6d6d cdw11:6d6d6d6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x6d6d6d6d6d6d6d6d 00:05:55.908 [2024-10-05 17:55:17.235044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.908 #87 NEW cov: 12447 ft: 15970 corp: 36/5366b lim: 320 exec/s: 87 rss: 74Mb L: 92/262 MS: 1 ChangeBit- 00:05:55.908 [2024-10-05 17:55:17.275152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2f) qid:0 cid:4 nsid:d6d6d6d6 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd6d6d6d641d6d6d6 00:05:55.908 [2024-10-05 17:55:17.275177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:55.908 #88 NEW cov: 12447 ft: 15987 corp: 37/5446b lim: 320 exec/s: 44 rss: 74Mb L: 80/262 MS: 1 ChangeByte- 00:05:55.908 #88 DONE cov: 12447 ft: 15987 corp: 37/5446b lim: 320 exec/s: 44 rss: 74Mb 00:05:55.908 ###### Recommended dictionary. ###### 00:05:55.908 "\001\000\000\000\000\000\003\363" # Uses: 6 00:05:55.908 "\001\000\000\000\000\000\000\000" # Uses: 0 00:05:55.908 ###### End of recommended dictionary. 
###### 00:05:55.908 Done 88 runs in 2 second(s) 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:56.166 17:55:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:05:56.166 [2024-10-05 17:55:17.466894] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:56.166 [2024-10-05 17:55:17.466966] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477558 ] 00:05:56.424 [2024-10-05 17:55:17.718540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.424 [2024-10-05 17:55:17.800622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.424 [2024-10-05 17:55:17.859306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.424 [2024-10-05 17:55:17.875649] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:05:56.681 INFO: Running with entropic power schedule (0xFF, 100). 
00:05:56.681 INFO: Seed: 699777376 00:05:56.681 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:05:56.681 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:05:56.682 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:05:56.682 INFO: A corpus is not provided, starting from an empty corpus 00:05:56.682 #2 INITED exec/s: 0 rss: 66Mb 00:05:56.682 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:56.682 This may also happen if the target rejected all inputs we tried so far 00:05:56.682 [2024-10-05 17:55:17.934608] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (908768) > buf size (4096) 00:05:56.682 [2024-10-05 17:55:17.934825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:77778377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.682 [2024-10-05 17:55:17.934853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.940 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:05:56.940 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:56.940 #9 NEW cov: 12252 ft: 12249 corp: 2/7b lim: 30 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 ChangeByte-InsertRepeatedBytes- 00:05:56.940 [2024-10-05 17:55:18.265594] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (49632) > buf size (4096) 00:05:56.940 [2024-10-05 17:55:18.265851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.265917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.940 #13 NEW cov: 12365 ft: 13015 corp: 3/13b lim: 30 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 CrossOver-InsertByte-CrossOver-CopyPart- 00:05:56.940 [2024-10-05 17:55:18.315577] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:56.940 [2024-10-05 17:55:18.315698] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:56.940 [2024-10-05 17:55:18.315809] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a0a 00:05:56.940 [2024-10-05 17:55:18.316021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.316047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.940 [2024-10-05 17:55:18.316104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.316119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.940 [2024-10-05 17:55:18.316174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:05:56.940 [2024-10-05 17:55:18.316191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:56.940 #18 NEW cov: 12377 ft: 13536 corp: 4/31b lim: 30 exec/s: 0 rss: 74Mb L: 18/18 MS: 5 InsertByte-ChangeBit-CopyPart-CopyPart-InsertRepeatedBytes- 00:05:56.940 [2024-10-05 17:55:18.355631] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003b0a 00:05:56.940 [2024-10-05 17:55:18.355850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7a3b020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.355877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.940 #22 NEW cov: 12462 ft: 13811 corp: 5/37b lim: 30 exec/s: 0 rss: 74Mb L: 6/18 MS: 4 InsertByte-ShuffleBytes-InsertByte-CopyPart- 00:05:56.940 [2024-10-05 17:55:18.395811] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:56.940 [2024-10-05 17:55:18.395929] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:56.940 [2024-10-05 17:55:18.396038] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a0a 00:05:56.940 [2024-10-05 17:55:18.396255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.396281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:56.940 [2024-10-05 17:55:18.396339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.396353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:56.940 [2024-10-05 17:55:18.396409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:56.940 [2024-10-05 17:55:18.396423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.197 #23 NEW cov: 12462 ft: 13891 corp: 6/55b lim: 30 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:05:57.197 [2024-10-05 17:55:18.455939] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (908768) > buf size (4096) 00:05:57.197 [2024-10-05 17:55:18.456152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:77778377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.197 [2024-10-05 17:55:18.456181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.197 #24 NEW cov: 12462 ft: 13979 corp: 7/61b lim: 30 exec/s: 0 rss: 74Mb L: 6/18 MS: 1 ChangeBit- 00:05:57.197 [2024-10-05 17:55:18.516089] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (908800) > buf size (4096) 00:05:57.197 [2024-10-05 17:55:18.516329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:777f8377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:05:57.197 [2024-10-05 17:55:18.516354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.197 #25 NEW cov: 12462 ft: 14147 corp: 8/67b lim: 30 exec/s: 0 rss: 74Mb L: 6/18 MS: 1 ChangeBit- 00:05:57.197 [2024-10-05 17:55:18.556174] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (836064) > buf size (4096) 00:05:57.197 [2024-10-05 17:55:18.556413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30778377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.197 [2024-10-05 17:55:18.556439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.197 #26 NEW cov: 12462 ft: 14199 corp: 9/73b lim: 30 exec/s: 0 rss: 74Mb L: 6/18 MS: 1 ChangeByte- 00:05:57.197 [2024-10-05 17:55:18.596293] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000267f 00:05:57.197 [2024-10-05 17:55:18.596504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:77778377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.197 [2024-10-05 17:55:18.596528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.197 #32 NEW cov: 12462 ft: 14267 corp: 10/80b lim: 30 exec/s: 0 rss: 74Mb L: 7/18 MS: 1 InsertByte- 00:05:57.197 [2024-10-05 17:55:18.656548] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003b0a 00:05:57.197 [2024-10-05 17:55:18.656865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7a3b020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.197 [2024-10-05 17:55:18.656892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.197 [2024-10-05 17:55:18.656949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.197 [2024-10-05 17:55:18.656964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.454 #33 NEW cov: 12479 ft: 14594 corp: 11/94b lim: 30 exec/s: 0 rss: 74Mb L: 14/18 MS: 1 InsertRepeatedBytes- 00:05:57.455 [2024-10-05 17:55:18.716725] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.455 [2024-10-05 17:55:18.716857] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.455 [2024-10-05 17:55:18.716967] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a0a 00:05:57.455 [2024-10-05 17:55:18.717193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.717220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.455 [2024-10-05 17:55:18.717279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.717294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.455 [2024-10-05 17:55:18.717360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.717377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.455 #34 NEW cov: 12479 ft: 14612 corp: 12/112b lim: 30 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 CopyPart- 00:05:57.455 [2024-10-05 17:55:18.756774] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003b0a 00:05:57.455 [2024-10-05 17:55:18.757102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7a3b020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.757128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.455 [2024-10-05 17:55:18.757191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.757206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.455 #35 NEW cov: 12479 ft: 14652 corp: 13/126b lim: 30 exec/s: 0 rss: 74Mb L: 14/18 MS: 1 ChangeBinInt- 00:05:57.455 [2024-10-05 17:55:18.816935] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (49632) > buf size (4096) 00:05:57.455 [2024-10-05 17:55:18.817072] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003131 00:05:57.455 [2024-10-05 17:55:18.817293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.817320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.455 [2024-10-05 17:55:18.817377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:31318131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.817392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.455 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:05:57.455 #36 NEW cov: 12502 ft: 14737 corp: 14/139b lim: 30 exec/s: 0 rss: 75Mb L: 13/18 MS: 1 InsertRepeatedBytes- 00:05:57.455 [2024-10-05 17:55:18.877127] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:05:57.455 [2024-10-05 17:55:18.877251] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.455 [2024-10-05 17:55:18.877358] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.455 [2024-10-05 17:55:18.877572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.877599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.455 [2024-10-05 17:55:18.877657] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:40ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.877671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.455 [2024-10-05 17:55:18.877726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.455 [2024-10-05 17:55:18.877740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.714 #37 NEW cov: 12502 ft: 14769 corp: 15/162b lim: 30 exec/s: 37 rss: 75Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:05:57.714 [2024-10-05 17:55:18.937239] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.714 [2024-10-05 17:55:18.937457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:18.937488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.714 #38 NEW cov: 12502 ft: 14801 corp: 16/169b lim: 30 exec/s: 38 rss: 75Mb L: 7/23 MS: 1 InsertRepeatedBytes- 00:05:57.714 [2024-10-05 17:55:18.977380] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.714 [2024-10-05 17:55:18.977603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:18.977631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.714 #39 NEW cov: 12502 ft: 14868 corp: 17/176b lim: 30 exec/s: 39 rss: 75Mb L: 7/23 MS: 1 CopyPart- 00:05:57.714 [2024-10-05 17:55:19.037592] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3077 00:05:57.714 [2024-10-05 17:55:19.037714] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003131 00:05:57.714 [2024-10-05 17:55:19.037933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a300077 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:19.037961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.714 [2024-10-05 17:55:19.038023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:40318131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:19.038038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.714 #40 NEW cov: 12502 ft: 14985 corp: 18/190b lim: 30 exec/s: 40 rss: 75Mb L: 14/23 MS: 1 InsertByte- 00:05:57.714 [2024-10-05 17:55:19.097764] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.714 [2024-10-05 17:55:19.097884] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:05:57.714 [2024-10-05 17:55:19.097994] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a0a 00:05:57.714 [2024-10-05 17:55:19.098234] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:19.098261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.714 [2024-10-05 17:55:19.098321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:19.098337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.714 [2024-10-05 17:55:19.098394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.714 [2024-10-05 17:55:19.098409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.714 #41 NEW cov: 12502 ft: 15045 corp: 19/208b lim: 30 exec/s: 41 rss: 75Mb L: 18/23 MS: 1 ChangeBinInt- 00:05:57.714 [2024-10-05 17:55:19.137936] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.714 [2024-10-05 17:55:19.138071] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.714 [2024-10-05 17:55:19.138181] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.714 [2024-10-05 17:55:19.138297] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:57.715 [2024-10-05 17:55:19.138408] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a92 00:05:57.715 [2024-10-05 17:55:19.138628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.715 [2024-10-05 17:55:19.138654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.715 [2024-10-05 17:55:19.138717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.715 [2024-10-05 17:55:19.138731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.715 [2024-10-05 17:55:19.138788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.715 [2024-10-05 17:55:19.138803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.715 [2024-10-05 17:55:19.138859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.715 [2024-10-05 17:55:19.138872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:57.715 [2024-10-05 17:55:19.138941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff86025d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.715 [2024-10-05 17:55:19.138954] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:57.715 #46 NEW cov: 12502 ft: 15592 corp: 20/238b lim: 30 exec/s: 46 rss: 75Mb L: 30/30 MS: 5 InsertByte-InsertByte-InsertByte-InsertByte-InsertRepeatedBytes- 00:05:57.973 [2024-10-05 17:55:19.177931] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000772a 00:05:57.973 [2024-10-05 17:55:19.178162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30778377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.178206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.973 #47 NEW cov: 12502 ft: 15604 corp: 21/248b lim: 30 exec/s: 47 rss: 75Mb L: 10/30 MS: 1 CrossOver- 00:05:57.973 [2024-10-05 17:55:19.238143] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (49632) > buf size (4096) 00:05:57.973 [2024-10-05 17:55:19.238266] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003177 00:05:57.973 [2024-10-05 17:55:19.238375] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (646624) > buf size (4096) 00:05:57.973 [2024-10-05 17:55:19.238585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.238611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.973 [2024-10-05 17:55:19.238669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:31318131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.238683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.973 [2024-10-05 17:55:19.238740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:77770277 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.238754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.973 #48 NEW cov: 12502 ft: 15637 corp: 22/268b lim: 30 exec/s: 48 rss: 75Mb L: 20/30 MS: 1 CrossOver- 00:05:57.973 [2024-10-05 17:55:19.278267] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x89bf 00:05:57.973 [2024-10-05 17:55:19.278388] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100003177 00:05:57.973 [2024-10-05 17:55:19.278497] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (646624) > buf size (4096) 00:05:57.973 [2024-10-05 17:55:19.278702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.278732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.973 [2024-10-05 17:55:19.278789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:cec48131 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.278804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.973 [2024-10-05 17:55:19.278860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:77770277 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.278874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.973 #49 NEW cov: 12502 ft: 15666 corp: 23/288b lim: 30 exec/s: 49 rss: 75Mb L: 20/30 MS: 1 ChangeBinInt- 00:05:57.973 [2024-10-05 17:55:19.338391] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003b0a 00:05:57.973 [2024-10-05 17:55:19.338720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7a3b020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.338746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.973 [2024-10-05 17:55:19.338805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.973 [2024-10-05 17:55:19.338819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.973 #50 NEW cov: 12502 ft: 15679 corp: 24/302b lim: 30 exec/s: 50 rss: 75Mb L: 14/30 MS: 1 ShuffleBytes- 00:05:57.973 [2024-10-05 17:55:19.398588] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393 00:05:57.973 [2024-10-05 17:55:19.398704] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009393 00:05:57.974 [2024-10-05 17:55:19.398830] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (151120) > buf size (4096) 00:05:57.974 [2024-10-05 17:55:19.399044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30778393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.974 [2024-10-05 17:55:19.399070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:57.974 [2024-10-05 17:55:19.399129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:93938393 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.974 [2024-10-05 17:55:19.399144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:57.974 [2024-10-05 17:55:19.399202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:93930040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.974 [2024-10-05 17:55:19.399216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:57.974 #51 NEW cov: 12502 ft: 15701 corp: 25/320b lim: 30 exec/s: 51 rss: 75Mb L: 18/30 MS: 1 InsertRepeatedBytes- 00:05:58.231 [2024-10-05 17:55:19.438781] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:05:58.231 [2024-10-05 17:55:19.438903] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.231 [2024-10-05 17:55:19.439015] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.231 
[2024-10-05 17:55:19.439239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.439264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.439328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:406083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.439342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.439399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.439413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.231 #52 NEW cov: 12502 ft: 15709 corp: 26/343b lim: 30 exec/s: 52 rss: 75Mb L: 23/30 MS: 1 ChangeByte- 00:05:58.231 [2024-10-05 17:55:19.498947] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.231 [2024-10-05 17:55:19.499066] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:05:58.231 [2024-10-05 17:55:19.499176] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:05:58.231 [2024-10-05 17:55:19.499284] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:05:58.231 [2024-10-05 17:55:19.499504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.499530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.499588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.499602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.499657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.499671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.499729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.499742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:58.231 #53 NEW cov: 12502 ft: 15732 corp: 27/368b lim: 30 exec/s: 53 rss: 75Mb L: 25/30 MS: 1 CrossOver- 00:05:58.231 [2024-10-05 17:55:19.559066] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x89bf 00:05:58.231 [2024-10-05 17:55:19.559183] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x200003277 00:05:58.231 [2024-10-05 17:55:19.559297] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (646624) > buf size (4096) 00:05:58.231 [2024-10-05 17:55:19.559522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.559547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.559602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:cec40232 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.559617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.231 [2024-10-05 17:55:19.559675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:77770277 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.559689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.231 #54 NEW cov: 12502 ft: 15744 corp: 28/388b lim: 30 exec/s: 54 rss: 76Mb L: 20/30 MS: 1 ChangeASCIIInt- 00:05:58.231 [2024-10-05 17:55:19.619151] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (835844) > buf size (4096) 00:05:58.231 [2024-10-05 17:55:19.619376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30408377 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.231 [2024-10-05 17:55:19.619402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.231 #55 NEW cov: 12502 ft: 15755 corp: 29/394b lim: 30 exec/s: 55 rss: 76Mb L: 6/30 MS: 1 ShuffleBytes- 00:05:58.231 [2024-10-05 17:55:19.659296] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x778b 00:05:58.231 [2024-10-05 17:55:19.659535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.232 [2024-10-05 17:55:19.659561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.232 #56 NEW cov: 12502 ft: 15773 corp: 30/400b lim: 30 exec/s: 56 rss: 76Mb L: 6/30 MS: 1 ChangeByte- 00:05:58.488 [2024-10-05 17:55:19.699452] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (49632) > buf size (4096) 00:05:58.488 [2024-10-05 17:55:19.699576] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.488 [2024-10-05 17:55:19.699690] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786432) > buf size (4096) 00:05:58.488 [2024-10-05 17:55:19.699912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:30770040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.699938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.699994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.700008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.700064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0277 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.700077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.488 #57 NEW cov: 12502 ft: 15832 corp: 31/420b lim: 30 exec/s: 57 rss: 76Mb L: 20/30 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:05:58.488 [2024-10-05 17:55:19.739559] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007a3b 00:05:58.488 [2024-10-05 17:55:19.739681] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:05:58.488 [2024-10-05 17:55:19.739892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7a3b020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.739918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.739973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a0083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.739987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.488 #58 NEW cov: 12502 ft: 15852 corp: 32/435b lim: 30 exec/s: 58 rss: 76Mb L: 15/30 MS: 1 CrossOver- 00:05:58.488 [2024-10-05 17:55:19.779819] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796932) > buf size (4096) 00:05:58.488 [2024-10-05 17:55:19.780036] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:05:58.488 [2024-10-05 17:55:19.780152] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.488 [2024-10-05 17:55:19.780271] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000a0a 00:05:58.488 [2024-10-05 17:55:19.780499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.780525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.780579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.780593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.780645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.780659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.780711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.780725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:58.488 [2024-10-05 17:55:19.780779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.488 [2024-10-05 17:55:19.780793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:58.488 #59 NEW cov: 12502 ft: 15891 corp: 33/465b lim: 30 exec/s: 59 rss: 76Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:05:58.488 [2024-10-05 17:55:19.819935] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.488 [2024-10-05 17:55:19.820056] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.488 [2024-10-05 17:55:19.820167] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.488 [2024-10-05 17:55:19.820279] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.489 [2024-10-05 17:55:19.820392] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a92 00:05:58.489 [2024-10-05 17:55:19.820616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.489 [2024-10-05 17:55:19.820642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.489 [2024-10-05 17:55:19.820698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.489 [2024-10-05 17:55:19.820712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:58.489 [2024-10-05 17:55:19.820765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.489 [2024-10-05 17:55:19.820779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:58.489 [2024-10-05 17:55:19.820831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.489 [2024-10-05 17:55:19.820844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:58.489 [2024-10-05 17:55:19.820895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff86025d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.489 [2024-10-05 17:55:19.820912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:58.489 #60 NEW cov: 12502 ft: 15902 corp: 34/495b lim: 30 exec/s: 60 rss: 76Mb L: 30/30 MS: 1 CopyPart- 00:05:58.489 [2024-10-05 17:55:19.879937] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:05:58.489 [2024-10-05 17:55:19.880154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:58.489 [2024-10-05 17:55:19.880180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:58.489 #61 NEW cov: 12502 ft: 15922 corp: 35/505b lim: 30 exec/s: 30 rss: 76Mb L: 10/30 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:05:58.489 #61 DONE cov: 12502 ft: 15922 corp: 35/505b lim: 30 exec/s: 30 rss: 76Mb 00:05:58.489 ###### Recommended dictionary. ###### 00:05:58.489 "\377\377\377\377\377\377\377\377" # Uses: 2 00:05:58.489 ###### End of recommended dictionary. ###### 00:05:58.489 Done 61 runs in 2 second(s) 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:58.746 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:05:58.747 17:55:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:05:58.747 [2024-10-05 17:55:20.091220] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:58.747 [2024-10-05 17:55:20.091291] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478041 ] 00:05:59.004 [2024-10-05 17:55:20.272359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.004 [2024-10-05 17:55:20.339635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.004 [2024-10-05 17:55:20.398875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.004 [2024-10-05 17:55:20.415228] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:05:59.004 INFO: Running with entropic power schedule (0xFF, 100). 00:05:59.004 INFO: Seed: 3238778314 00:05:59.004 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:05:59.004 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:05:59.004 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:05:59.004 INFO: A corpus is not provided, starting from an empty corpus 00:05:59.004 #2 INITED exec/s: 0 rss: 65Mb 00:05:59.004 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:05:59.004 This may also happen if the target rejected all inputs we tried so far 00:05:59.261 [2024-10-05 17:55:20.474345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.261 [2024-10-05 17:55:20.474373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.261 [2024-10-05 17:55:20.474446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.261 [2024-10-05 17:55:20.474471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.261 [2024-10-05 17:55:20.474523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.261 [2024-10-05 17:55:20.474537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.261 [2024-10-05 17:55:20.474589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.261 [2024-10-05 17:55:20.474602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.261 [2024-10-05 17:55:20.474652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:2800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.261 [2024-10-05 17:55:20.474666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:59.517 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:05:59.517 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:05:59.517 #9 NEW cov: 12191 ft: 12189 corp: 2/36b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:05:59.517 [2024-10-05 17:55:20.805623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.805710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.805839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.805881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.805990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.806030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.806140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.806199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.517 #10 NEW cov: 12304 ft: 12940 corp: 3/69b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 CrossOver- 00:05:59.517 [2024-10-05 17:55:20.874871] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:59.517 [2024-10-05 17:55:20.875197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.875224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.875284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.875298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.875352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.875367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.875420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.875434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.517 #11 NEW cov: 12321 ft: 13344 corp: 4/102b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ChangeBinInt- 00:05:59.517 [2024-10-05 
17:55:20.934971] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:59.517 [2024-10-05 17:55:20.935311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.935339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.935394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.935409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.935459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.517 [2024-10-05 17:55:20.935475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.517 [2024-10-05 17:55:20.935526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.518 [2024-10-05 17:55:20.935541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.518 #12 NEW cov: 12406 ft: 13581 corp: 5/135b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ShuffleBytes- 00:05:59.774 [2024-10-05 17:55:20.995180] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:59.774 [2024-10-05 17:55:20.995519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:20.995544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:20.995601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:20.995619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:20.995672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:20.995688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:20.995742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:20.995756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.774 #13 NEW cov: 12406 ft: 13719 corp: 6/169b lim: 35 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 CrossOver- 00:05:59.774 [2024-10-05 17:55:21.055731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.055756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:21.055828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.055842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:21.055897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.055911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:21.055968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.055981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:21.056037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.056050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:05:59.774 #14 NEW cov: 12406 ft: 13817 corp: 7/204b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:05:59.774 [2024-10-05 17:55:21.095732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.095756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.774 [2024-10-05 17:55:21.095827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.774 [2024-10-05 17:55:21.095841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.095895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.095909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.095963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.095976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.775 #15 NEW cov: 12406 ft: 13934 corp: 8/237b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ChangeByte- 00:05:59.775 [2024-10-05 17:55:21.135536] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:59.775 [2024-10-05 17:55:21.135867] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.135893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.135949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.135964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.136017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.136033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.136085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.136099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.775 #16 NEW cov: 12406 ft: 13944 corp: 9/271b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:05:59.775 [2024-10-05 17:55:21.196001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.196026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.196098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.196112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.196165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.196179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.196239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.196254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.775 #17 NEW cov: 12406 ft: 13974 corp: 10/300b lim: 35 exec/s: 0 rss: 74Mb L: 29/35 MS: 1 EraseBytes- 00:05:59.775 [2024-10-05 17:55:21.236359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.236385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.236440] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.236455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.236506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.236525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.236576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.236590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:05:59.775 [2024-10-05 17:55:21.236643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:2800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:59.775 [2024-10-05 17:55:21.236657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.032 #18 NEW cov: 12406 ft: 14017 corp: 11/335b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:06:00.032 [2024-10-05 17:55:21.275914] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.032 [2024-10-05 17:55:21.276035] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.032 [2024-10-05 17:55:21.276259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.276285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.276340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.276355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.276408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.276423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.276476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.276491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.032 #19 NEW cov: 12406 ft: 14133 corp: 12/369b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:00.032 [2024-10-05 17:55:21.336513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.336539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.336595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.336609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.336680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.336695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.336748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00feff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.336761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.336814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.336831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.032 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:00.032 #20 NEW cov: 12429 ft: 14174 corp: 13/404b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:00.032 [2024-10-05 17:55:21.396537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.396563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.396618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.396632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.396687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.396701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.396753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff0021ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.396766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.032 #21 NEW cov: 12429 ft: 14246 corp: 14/437b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 ChangeBinInt- 00:06:00.032 [2024-10-05 17:55:21.456870] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.456896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.456953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.456967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.457022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.457036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.457089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.457103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.032 [2024-10-05 17:55:21.457158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:2800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.032 [2024-10-05 17:55:21.457172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.290 #22 NEW cov: 12429 ft: 14263 corp: 15/472b lim: 35 exec/s: 22 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:00.290 [2024-10-05 17:55:21.517003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.290 [2024-10-05 17:55:21.517028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.290 [2024-10-05 17:55:21.517088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.290 [2024-10-05 17:55:21.517101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.517155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bfff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.517169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.517221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.517235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.517289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:2800ffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.517302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.291 #23 NEW cov: 12429 ft: 14272 corp: 16/507b lim: 35 exec/s: 23 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:06:00.291 [2024-10-05 17:55:21.556691] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.291 [2024-10-05 17:55:21.557046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.557072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.557128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.557142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.557200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.557215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.557279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.557293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.291 #24 NEW cov: 12429 ft: 14305 corp: 17/541b lim: 35 exec/s: 24 rss: 74Mb L: 34/35 MS: 1 ChangeBit- 00:06:00.291 [2024-10-05 17:55:21.597119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.597144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.597204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.597235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.597290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.597305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.597359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.597375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.291 #25 NEW cov: 12429 ft: 14342 corp: 18/574b lim: 35 exec/s: 25 
rss: 74Mb L: 33/35 MS: 1 CopyPart- 00:06:00.291 [2024-10-05 17:55:21.637207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.637231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.637311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.637325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.637378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.637391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.637445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff7e00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.637458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.291 #26 NEW cov: 12429 ft: 14386 corp: 19/608b lim: 35 exec/s: 26 rss: 74Mb L: 34/35 MS: 1 InsertByte- 00:06:00.291 [2024-10-05 17:55:21.697103] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.291 [2024-10-05 17:55:21.697450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.697476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.697531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.697546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.697601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.697616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.697669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:69ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.697683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.291 #27 NEW cov: 12429 ft: 14389 corp: 20/642b lim: 35 exec/s: 27 rss: 74Mb L: 34/35 MS: 1 ChangeByte- 00:06:00.291 [2024-10-05 17:55:21.737475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 
[2024-10-05 17:55:21.737501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.737556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:7f00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.737570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.737628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.737642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.291 [2024-10-05 17:55:21.737695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.291 [2024-10-05 17:55:21.737708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.549 #28 NEW cov: 12429 ft: 14403 corp: 21/675b lim: 35 exec/s: 28 rss: 74Mb L: 33/35 MS: 1 ChangeBit- 00:06:00.549 [2024-10-05 17:55:21.777738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.777763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.777833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.777848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.777900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.777914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.777968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.777981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.778035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.778048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.549 #29 NEW cov: 12429 ft: 14419 corp: 22/710b lim: 35 exec/s: 29 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:00.549 [2024-10-05 17:55:21.837614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:5d3200ff cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.837640] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.837695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.837710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.837763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.837776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.549 #34 NEW cov: 12429 ft: 14935 corp: 23/733b lim: 35 exec/s: 34 rss: 74Mb L: 23/35 MS: 5 CopyPart-ChangeByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:06:00.549 [2024-10-05 17:55:21.877565] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.549 [2024-10-05 17:55:21.877898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.877927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.877985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.877999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.878052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.878067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.878122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.878136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.549 #35 NEW cov: 12429 ft: 14952 corp: 24/766b lim: 35 exec/s: 35 rss: 74Mb L: 33/35 MS: 1 ChangeByte- 00:06:00.549 [2024-10-05 17:55:21.917673] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.549 [2024-10-05 17:55:21.918014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.918040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.918095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.918109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.918161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.918176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.549 [2024-10-05 17:55:21.918235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.549 [2024-10-05 17:55:21.918249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.550 #41 NEW cov: 12429 ft: 14975 corp: 25/800b lim: 35 exec/s: 41 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:00.550 [2024-10-05 17:55:21.978281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.550 [2024-10-05 17:55:21.978307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.550 [2024-10-05 17:55:21.978376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.550 [2024-10-05 17:55:21.978391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.550 [2024-10-05 17:55:21.978445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.550 [2024-10-05 17:55:21.978459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.550 [2024-10-05 17:55:21.978514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.550 [2024-10-05 17:55:21.978531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.550 [2024-10-05 17:55:21.978585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:2800ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.550 [2024-10-05 17:55:21.978599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:00.550 #42 NEW cov: 12429 ft: 14986 corp: 26/835b lim: 35 exec/s: 42 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:00.808 [2024-10-05 17:55:22.017948] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.808 [2024-10-05 17:55:22.018299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.808 [2024-10-05 17:55:22.018325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.808 [2024-10-05 17:55:22.018379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff01 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.808 [2024-10-05 17:55:22.018393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.808 [2024-10-05 17:55:22.018445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.808 [2024-10-05 17:55:22.018460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.808 [2024-10-05 17:55:22.018514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.808 [2024-10-05 17:55:22.018527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.808 #43 NEW cov: 12429 ft: 15021 corp: 27/868b lim: 35 exec/s: 43 rss: 74Mb L: 33/35 MS: 1 CopyPart- 00:06:00.808 [2024-10-05 17:55:22.058068] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.808 [2024-10-05 17:55:22.058421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.808 [2024-10-05 17:55:22.058448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.808 [2024-10-05 17:55:22.058503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:010046ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.808 [2024-10-05 17:55:22.058518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.808 [2024-10-05 17:55:22.058570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.058586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.058637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.058651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.809 #44 NEW cov: 12429 ft: 15084 corp: 28/902b lim: 35 exec/s: 44 rss: 74Mb L: 34/35 MS: 1 ChangeByte- 00:06:00.809 [2024-10-05 17:55:22.098529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.098554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.098612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0100ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.098626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.098680] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff0000ff cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.098694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.098744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.098757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.809 #45 NEW cov: 12429 ft: 15122 corp: 29/936b lim: 35 exec/s: 45 rss: 74Mb L: 34/35 MS: 1 CopyPart- 00:06:00.809 [2024-10-05 17:55:22.138621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.138646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.138701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff006a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.138715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.138766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.138796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.138847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.138861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.809 #46 NEW cov: 12429 ft: 15130 corp: 30/965b lim: 35 exec/s: 46 rss: 75Mb L: 29/35 MS: 1 ChangeByte- 00:06:00.809 [2024-10-05 17:55:22.198455] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:00.809 [2024-10-05 17:55:22.198784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ff26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.198810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.198866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.198880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.198931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.198945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.198998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.199012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.809 #47 NEW cov: 12429 ft: 15144 corp: 31/998b lim: 35 exec/s: 47 rss: 75Mb L: 33/35 MS: 1 ChangeByte- 00:06:00.809 [2024-10-05 17:55:22.238954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.238979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.239036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fffe00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.239050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.239106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.239120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.239174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00feff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.239191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:00.809 [2024-10-05 17:55:22.239246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:00.809 [2024-10-05 17:55:22.239260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:01.067 #48 NEW cov: 12429 ft: 15166 corp: 32/1033b lim: 35 exec/s: 48 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:06:01.067 [2024-10-05 17:55:22.299022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.299048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.299104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.299118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.299203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.299218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.299273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.299286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:01.067 #49 NEW cov: 12429 ft: 15171 corp: 33/1064b lim: 35 exec/s: 49 rss: 75Mb L: 31/35 MS: 1 EraseBytes- 00:06:01.067 [2024-10-05 17:55:22.339129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.339154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.339211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.339226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.339282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000040 cdw11:ff000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.339311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.339367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.339381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:01.067 #50 NEW cov: 12429 ft: 15175 corp: 34/1097b lim: 35 exec/s: 50 rss: 75Mb L: 33/35 MS: 1 ChangeBit- 00:06:01.067 [2024-10-05 17:55:22.379114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6666005d cdw11:6600ff32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.379140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.379199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.379214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.067 [2024-10-05 17:55:22.379270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 17:55:22.379284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:01.067 #51 NEW cov: 12429 ft: 15191 corp: 35/1120b lim: 35 exec/s: 51 rss: 75Mb L: 23/35 MS: 1 ShuffleBytes- 00:06:01.067 [2024-10-05 17:55:22.439400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:01.067 [2024-10-05 
17:55:22.439425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:01.068 [2024-10-05 17:55:22.439483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff006a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:01.068 [2024-10-05 17:55:22.439497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:01.068 [2024-10-05 17:55:22.439550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:01.068 [2024-10-05 17:55:22.439581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:01.068 [2024-10-05 17:55:22.439634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00fffb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:01.068 [2024-10-05 17:55:22.439648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:01.068 #52 NEW cov: 12429 ft: 15202 corp: 36/1149b lim: 35 exec/s: 26 rss: 75Mb L: 29/35 MS: 1 ChangeBit-
00:06:01.068 #52 DONE cov: 12429 ft: 15202 corp: 36/1149b lim: 35 exec/s: 26 rss: 75Mb
00:06:01.068 ###### Recommended dictionary. ######
00:06:01.068 "\377\377\377\377\377\377\377\377" # Uses: 0
00:06:01.068 ###### End of recommended dictionary. ######
00:06:01.068 Done 52 runs in 2 second(s)
00:06:01.326 17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403'
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
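The trid assembled at run.sh@37 above is SPDK's space-separated NVMe-oF transport ID format: transport type, address family, subsystem NQN, target address, and service ID (the TCP port, which the sed step at run.sh@38 also rewrites inside the JSON config so the target and the fuzzer agree). As a reference sketch only, not part of run.sh, and assuming SPDK's public spdk_nvme_transport_id_parse() API from spdk/nvme.h, the same string can be decoded into a struct like this when compiled and linked against SPDK:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        /* The same transport ID string run.sh hands to the fuzzer via -F. */
        const char *str = "trtype:tcp adrfam:IPv4 "
                          "subnqn:nqn.2016-06.io.spdk:cnode1 "
                          "traddr:127.0.0.1 trsvcid:4403";
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        /* Splits the space-separated key:value pairs into struct fields. */
        if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }
        /* trid now carries the TCP listener coordinates the target opened. */
        printf("traddr=%s trsvcid=%s subnqn=%s\n",
               trid.traddr, trid.trsvcid, trid.subnqn);
        return 0;
    }

Giving each fuzzer type its own port (4403 for -Z 3 here, 4404 for -Z 4 below) lets every short run bring up a fresh listener without colliding with the previous target instance.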
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
17:55:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3
[2024-10-05 17:55:22.649433] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
[2024-10-05 17:55:22.649502] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478571 ]
[2024-10-05 17:55:22.824150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-05 17:55:22.890096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-05 17:55:22.948793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-10-05 17:55:22.965151] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 ***
INFO: Running with entropic power schedule (0xFF, 100).
00:06:01.584 INFO: Seed: 1493799781
INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d),
INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 65Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:01.584 This may also happen if the target rejected all inputs we tried so far 00:06:02.097 NEW_FUNC[1/700]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:02.097 NEW_FUNC[2/700]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:02.098 #6 NEW cov: 12079 ft: 12075 corp: 2/14b lim: 20 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 ChangeBit-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:06:02.098 NEW_FUNC[1/3]: 0x1776d98 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3946 00:06:02.098 NEW_FUNC[2/3]: 0x1957388 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1602 00:06:02.098 #7 NEW cov: 12216 ft: 12790 corp: 3/27b lim: 20 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:06:02.098 [2024-10-05 17:55:23.425860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.098 [2024-10-05 17:55:23.425905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.098 NEW_FUNC[1/17]: 0x132df88 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:06:02.098 NEW_FUNC[2/17]: 0x132eb08 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:06:02.098 #8 NEW cov: 12468 ft: 13474 corp: 4/36b lim: 20 exec/s: 0 rss: 73Mb L: 9/13 MS: 1 CMP- DE: "\372\003\000\000\000\000\000\000"- 00:06:02.098 #9 NEW cov: 12553 ft: 13800 corp: 5/45b lim: 20 exec/s: 0 rss: 73Mb L: 9/13 MS: 1 CrossOver- 00:06:02.355 #10 NEW cov: 12553 ft: 13900 corp: 6/55b lim: 20 exec/s: 0 rss: 73Mb L: 10/13 MS: 1 EraseBytes- 00:06:02.355 [2024-10-05 17:55:23.606920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.355 [2024-10-05 17:55:23.606956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.355 #11 NEW cov: 12570 ft: 14203 corp: 7/73b lim: 20 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:02.355 #12 NEW cov: 12570 ft: 14288 corp: 8/90b lim: 20 exec/s: 0 rss: 73Mb L: 17/18 MS: 1 InsertRepeatedBytes- 00:06:02.355 #13 NEW cov: 12570 ft: 14312 corp: 9/105b lim: 20 exec/s: 0 rss: 73Mb L: 15/18 MS: 1 CopyPart- 00:06:02.355 [2024-10-05 17:55:23.756847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.355 [2024-10-05 17:55:23.756882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.355 #14 NEW cov: 12570 ft: 14378 corp: 10/114b lim: 20 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 PersAutoDict- DE: "\372\003\000\000\000\000\000\000"- 00:06:02.618 #15 NEW cov: 12570 ft: 14427 corp: 11/127b lim: 20 exec/s: 0 rss: 73Mb L: 13/18 MS: 1 ChangeBinInt- 00:06:02.618 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:02.618 #16 NEW cov: 12593 ft: 14464 corp: 12/144b lim: 20 exec/s: 0 rss: 73Mb L: 17/18 MS: 1 ChangeByte- 00:06:02.618 #17 NEW cov: 12593 ft: 14516 corp: 13/152b lim: 20 exec/s: 0 rss: 73Mb 
L: 8/18 MS: 1 EraseBytes- 00:06:02.618 #18 NEW cov: 12593 ft: 14588 corp: 14/165b lim: 20 exec/s: 18 rss: 73Mb L: 13/18 MS: 1 CopyPart- 00:06:02.923 #19 NEW cov: 12593 ft: 14648 corp: 15/178b lim: 20 exec/s: 19 rss: 73Mb L: 13/18 MS: 1 ShuffleBytes- 00:06:02.923 #20 NEW cov: 12593 ft: 14723 corp: 16/197b lim: 20 exec/s: 20 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:06:02.923 NEW_FUNC[1/2]: 0x14a4368 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:784 00:06:02.923 NEW_FUNC[2/2]: 0x14cbde8 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3702 00:06:02.923 #21 NEW cov: 12649 ft: 14828 corp: 17/206b lim: 20 exec/s: 21 rss: 74Mb L: 9/19 MS: 1 CopyPart- 00:06:02.924 #22 NEW cov: 12649 ft: 14919 corp: 18/220b lim: 20 exec/s: 22 rss: 74Mb L: 14/19 MS: 1 InsertByte- 00:06:02.924 #23 NEW cov: 12649 ft: 14968 corp: 19/238b lim: 20 exec/s: 23 rss: 74Mb L: 18/19 MS: 1 CrossOver- 00:06:03.190 #29 NEW cov: 12649 ft: 15037 corp: 20/251b lim: 20 exec/s: 29 rss: 74Mb L: 13/19 MS: 1 ShuffleBytes- 00:06:03.190 #30 NEW cov: 12649 ft: 15058 corp: 21/264b lim: 20 exec/s: 30 rss: 74Mb L: 13/19 MS: 1 ChangeBit- 00:06:03.190 #31 NEW cov: 12649 ft: 15082 corp: 22/279b lim: 20 exec/s: 31 rss: 74Mb L: 15/19 MS: 1 InsertByte- 00:06:03.190 #32 NEW cov: 12649 ft: 15099 corp: 23/287b lim: 20 exec/s: 32 rss: 74Mb L: 8/19 MS: 1 EraseBytes- 00:06:03.191 [2024-10-05 17:55:24.589461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:03.191 [2024-10-05 17:55:24.589499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.191 #33 NEW cov: 12649 ft: 15118 corp: 24/296b lim: 20 exec/s: 33 rss: 74Mb L: 9/19 MS: 1 ChangeBinInt- 00:06:03.448 #34 NEW cov: 12649 ft: 15145 corp: 25/316b lim: 20 exec/s: 34 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:06:03.448 #35 NEW cov: 12649 ft: 15164 corp: 26/334b lim: 20 exec/s: 35 rss: 74Mb L: 18/20 MS: 1 CopyPart- 00:06:03.448 [2024-10-05 17:55:24.779956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:03.448 [2024-10-05 17:55:24.779989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.448 #39 NEW cov: 12649 ft: 15262 corp: 27/343b lim: 20 exec/s: 39 rss: 74Mb L: 9/20 MS: 4 ChangeByte-ShuffleBytes-ChangeBit-CMP- DE: "\001\004\000\000\000\000\000\000"- 00:06:03.448 #40 NEW cov: 12649 ft: 15275 corp: 28/358b lim: 20 exec/s: 40 rss: 74Mb L: 15/20 MS: 1 ChangeByte- 00:06:03.705 #41 NEW cov: 12649 ft: 15304 corp: 29/371b lim: 20 exec/s: 41 rss: 74Mb L: 13/20 MS: 1 ShuffleBytes- 00:06:03.705 [2024-10-05 17:55:24.950909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:03.705 [2024-10-05 17:55:24.950944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.705 #42 NEW cov: 12649 ft: 15312 corp: 30/390b lim: 20 exec/s: 42 rss: 74Mb L: 19/20 MS: 1 InsertRepeatedBytes- 00:06:03.705 [2024-10-05 17:55:25.001058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:03.705 [2024-10-05 17:55:25.001088] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:03.705 #43 NEW cov: 12649 ft: 15318 corp: 31/407b lim: 20 exec/s: 21 rss: 74Mb L: 17/20 MS: 1 PersAutoDict- DE: "\001\004\000\000\000\000\000\000"- 00:06:03.705 #43 DONE cov: 12649 ft: 15318 corp: 31/407b lim: 20 exec/s: 21 rss: 74Mb 00:06:03.705 ###### Recommended dictionary. ###### 00:06:03.705 "\372\003\000\000\000\000\000\000" # Uses: 1 00:06:03.705 "\001\004\000\000\000\000\000\000" # Uses: 1 00:06:03.705 ###### End of recommended dictionary. ###### 00:06:03.705 Done 43 runs in 2 second(s) 00:06:03.705 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:03.963 17:55:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:03.963 [2024-10-05 17:55:25.216444] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:03.963 [2024-10-05 17:55:25.216514] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478998 ] 00:06:03.963 [2024-10-05 17:55:25.393835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.221 [2024-10-05 17:55:25.463635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.221 [2024-10-05 17:55:25.522659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.221 [2024-10-05 17:55:25.538999] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:04.221 INFO: Running with entropic power schedule (0xFF, 100). 00:06:04.221 INFO: Seed: 4069816725 00:06:04.221 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:04.221 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:04.221 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:04.221 INFO: A corpus is not provided, starting from an empty corpus 00:06:04.221 #2 INITED exec/s: 0 rss: 65Mb 00:06:04.221 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:04.221 This may also happen if the target rejected all inputs we tried so far 00:06:04.221 [2024-10-05 17:55:25.594377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.221 [2024-10-05 17:55:25.594406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.478 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:04.478 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:04.478 #6 NEW cov: 12212 ft: 12193 corp: 2/12b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 ChangeBit-InsertByte-CrossOver-CMP- DE: "\377\377\377\377\377\377\377\012"- 00:06:04.478 [2024-10-05 17:55:25.915560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.478 [2024-10-05 17:55:25.915591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.478 [2024-10-05 17:55:25.915648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.478 [2024-10-05 17:55:25.915662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.478 [2024-10-05 17:55:25.915717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.478 [2024-10-05 17:55:25.915731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.735 #12 NEW cov: 12325 ft: 13426 corp: 3/38b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 
1 InsertRepeatedBytes- 00:06:04.735 [2024-10-05 17:55:25.975491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.735 [2024-10-05 17:55:25.975517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.735 [2024-10-05 17:55:25.975572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.735 [2024-10-05 17:55:25.975589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.735 #13 NEW cov: 12331 ft: 13872 corp: 4/57b lim: 35 exec/s: 0 rss: 73Mb L: 19/26 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\012"- 00:06:04.735 [2024-10-05 17:55:26.015403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.735 [2024-10-05 17:55:26.015428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.735 #16 NEW cov: 12416 ft: 14299 corp: 5/66b lim: 35 exec/s: 0 rss: 73Mb L: 9/26 MS: 3 ChangeByte-ChangeBinInt-PersAutoDict- DE: "\377\377\377\377\377\377\377\012"- 00:06:04.735 [2024-10-05 17:55:26.055832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.735 [2024-10-05 17:55:26.055857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.735 [2024-10-05 17:55:26.055912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.735 [2024-10-05 17:55:26.055925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.735 [2024-10-05 17:55:26.055979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.735 [2024-10-05 17:55:26.055992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.735 #17 NEW cov: 12416 ft: 14450 corp: 6/87b lim: 35 exec/s: 0 rss: 73Mb L: 21/26 MS: 1 EraseBytes- 00:06:04.736 [2024-10-05 17:55:26.115697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.736 [2024-10-05 17:55:26.115722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.736 #18 NEW cov: 12416 ft: 14491 corp: 7/98b lim: 35 exec/s: 0 rss: 73Mb L: 11/26 MS: 1 ChangeByte- 00:06:04.736 [2024-10-05 17:55:26.155954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.736 [2024-10-05 17:55:26.155979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.736 [2024-10-05 
17:55:26.156035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:40ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.736 [2024-10-05 17:55:26.156049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.992 #19 NEW cov: 12416 ft: 14547 corp: 8/117b lim: 35 exec/s: 0 rss: 73Mb L: 19/26 MS: 1 ChangeByte- 00:06:04.992 [2024-10-05 17:55:26.216141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.216166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.992 [2024-10-05 17:55:26.216226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.216240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.992 #20 NEW cov: 12416 ft: 14633 corp: 9/135b lim: 35 exec/s: 0 rss: 74Mb L: 18/26 MS: 1 EraseBytes- 00:06:04.992 [2024-10-05 17:55:26.276160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.276193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.992 #21 NEW cov: 12416 ft: 14668 corp: 10/146b lim: 35 exec/s: 0 rss: 74Mb L: 11/26 MS: 1 ChangeBinInt- 00:06:04.992 [2024-10-05 17:55:26.336310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.336334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.992 #22 NEW cov: 12416 ft: 14722 corp: 11/153b lim: 35 exec/s: 0 rss: 74Mb L: 7/26 MS: 1 InsertRepeatedBytes- 00:06:04.992 [2024-10-05 17:55:26.376440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.376464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.992 #23 NEW cov: 12416 ft: 14751 corp: 12/164b lim: 35 exec/s: 0 rss: 74Mb L: 11/26 MS: 1 CopyPart- 00:06:04.992 [2024-10-05 17:55:26.436771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.436797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.992 [2024-10-05 17:55:26.436854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:40ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:04.992 [2024-10-05 17:55:26.436867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.249 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:05.249 #24 NEW cov: 12439 ft: 14771 corp: 13/183b lim: 35 exec/s: 0 rss: 74Mb L: 19/26 MS: 1 ChangeByte- 00:06:05.249 [2024-10-05 17:55:26.496829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.496856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.249 #25 NEW cov: 12439 ft: 14854 corp: 14/194b lim: 35 exec/s: 0 rss: 74Mb L: 11/26 MS: 1 ShuffleBytes- 00:06:05.249 [2024-10-05 17:55:26.557412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.557438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.557496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.557510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.557565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:98989898 cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.557578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.557631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:98989898 cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.557644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.249 #26 NEW cov: 12439 ft: 15260 corp: 15/228b lim: 35 exec/s: 26 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:05.249 [2024-10-05 17:55:26.597712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.597738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.597797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ceff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.597811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.597868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9898ff98 cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.597881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.597937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:98989898 cdw11:98980001 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.597950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.249 [2024-10-05 17:55:26.598006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffff9898 cdw11:ff0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.598020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:05.249 #27 NEW cov: 12439 ft: 15325 corp: 16/263b lim: 35 exec/s: 27 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:06:05.249 [2024-10-05 17:55:26.657219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a40 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.657245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.249 #28 NEW cov: 12439 ft: 15341 corp: 17/274b lim: 35 exec/s: 28 rss: 74Mb L: 11/35 MS: 1 EraseBytes- 00:06:05.249 [2024-10-05 17:55:26.697319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a40 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.249 [2024-10-05 17:55:26.697345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.757499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a40 cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.757525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.506 #30 NEW cov: 12439 ft: 15385 corp: 18/285b lim: 35 exec/s: 30 rss: 74Mb L: 11/35 MS: 2 ShuffleBytes-CrossOver- 00:06:05.506 [2024-10-05 17:55:26.798083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.798108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.798182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.798202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.798257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.798270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.798329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.798343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.506 #31 NEW cov: 12439 ft: 15414 corp: 
19/319b lim: 35 exec/s: 31 rss: 74Mb L: 34/35 MS: 1 CopyPart- 00:06:05.506 [2024-10-05 17:55:26.838198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.838224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.838278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.838292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.838344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.838358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.838412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff23ffff cdw11:ff0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.838425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.506 #32 NEW cov: 12439 ft: 15448 corp: 20/347b lim: 35 exec/s: 32 rss: 74Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:06:05.506 [2024-10-05 17:55:26.898207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff418a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.898233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.898305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.898319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.898375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.898389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.506 #33 NEW cov: 12439 ft: 15468 corp: 21/373b lim: 35 exec/s: 33 rss: 74Mb L: 26/35 MS: 1 ChangeByte- 00:06:05.506 [2024-10-05 17:55:26.938364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.938390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.938448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.938462] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.506 [2024-10-05 17:55:26.938516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff40ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.506 [2024-10-05 17:55:26.938529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.506 #34 NEW cov: 12439 ft: 15486 corp: 22/400b lim: 35 exec/s: 34 rss: 74Mb L: 27/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\012"- 00:06:05.763 [2024-10-05 17:55:26.978446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff418a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:26.978472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:26.978529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:26.978542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:26.978597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:26.978610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.763 #35 NEW cov: 12439 ft: 15502 corp: 23/426b lim: 35 exec/s: 35 rss: 75Mb L: 26/35 MS: 1 ShuffleBytes- 00:06:05.763 [2024-10-05 17:55:27.038937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.038963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:27.039018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff21 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.039031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:27.039086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.039116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:27.039174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.039191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:27.039244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.039258] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:05.763 #36 NEW cov: 12439 ft: 15523 corp: 24/461b lim: 35 exec/s: 36 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:06:05.763 [2024-10-05 17:55:27.098473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a1a cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.098498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.763 #37 NEW cov: 12439 ft: 15531 corp: 25/472b lim: 35 exec/s: 37 rss: 75Mb L: 11/35 MS: 1 ChangeBit- 00:06:05.763 [2024-10-05 17:55:27.158794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:98988a0a cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.158819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.763 [2024-10-05 17:55:27.158873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:98989898 cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.158890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.763 #38 NEW cov: 12439 ft: 15560 corp: 26/491b lim: 35 exec/s: 38 rss: 75Mb L: 19/35 MS: 1 EraseBytes- 00:06:05.763 [2024-10-05 17:55:27.218831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff820aae cdw11:ff0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:05.763 [2024-10-05 17:55:27.218856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.020 #42 NEW cov: 12439 ft: 15574 corp: 27/498b lim: 35 exec/s: 42 rss: 75Mb L: 7/35 MS: 4 EraseBytes-ChangeByte-CopyPart-InsertByte- 00:06:06.020 [2024-10-05 17:55:27.258872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff0a8aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.258897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.020 #43 NEW cov: 12439 ft: 15613 corp: 28/511b lim: 35 exec/s: 43 rss: 75Mb L: 13/35 MS: 1 EraseBytes- 00:06:06.020 [2024-10-05 17:55:27.299017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a40 cdw11:40ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.299041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.020 #44 NEW cov: 12439 ft: 15617 corp: 29/523b lim: 35 exec/s: 44 rss: 75Mb L: 12/35 MS: 1 InsertByte- 00:06:06.020 [2024-10-05 17:55:27.359351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:98988a0a cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.359376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.020 [2024-10-05 17:55:27.359432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 
cdw10:98989898 cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.359445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.020 #45 NEW cov: 12439 ft: 15634 corp: 30/542b lim: 35 exec/s: 45 rss: 75Mb L: 19/35 MS: 1 CopyPart- 00:06:06.020 [2024-10-05 17:55:27.419509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c3988a0a cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.419534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.020 [2024-10-05 17:55:27.419590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:98989898 cdw11:98980001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.419603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.020 #46 NEW cov: 12439 ft: 15646 corp: 31/561b lim: 35 exec/s: 46 rss: 75Mb L: 19/35 MS: 1 ChangeByte- 00:06:06.020 [2024-10-05 17:55:27.479898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.479923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.020 [2024-10-05 17:55:27.479996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.480010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.020 [2024-10-05 17:55:27.480065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.020 [2024-10-05 17:55:27.480081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.277 #47 NEW cov: 12439 ft: 15687 corp: 32/582b lim: 35 exec/s: 47 rss: 75Mb L: 21/35 MS: 1 InsertRepeatedBytes- 00:06:06.277 [2024-10-05 17:55:27.520083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.520108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.277 [2024-10-05 17:55:27.520164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.520178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.277 [2024-10-05 17:55:27.520235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.520249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.277 
[2024-10-05 17:55:27.520302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:40ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.520315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.277 #48 NEW cov: 12439 ft: 15696 corp: 33/615b lim: 35 exec/s: 48 rss: 75Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:06.277 [2024-10-05 17:55:27.580288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff8a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.580313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.277 [2024-10-05 17:55:27.580370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.580383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.277 [2024-10-05 17:55:27.580437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.580467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.277 [2024-10-05 17:55:27.580522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.277 [2024-10-05 17:55:27.580535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.277 #49 NEW cov: 12439 ft: 15708 corp: 34/649b lim: 35 exec/s: 24 rss: 75Mb L: 34/35 MS: 1 CopyPart- 00:06:06.277 #49 DONE cov: 12439 ft: 15708 corp: 34/649b lim: 35 exec/s: 24 rss: 75Mb 00:06:06.277 ###### Recommended dictionary. ###### 00:06:06.277 "\377\377\377\377\377\377\377\012" # Uses: 3 00:06:06.277 ###### End of recommended dictionary. 
###### 00:06:06.277 Done 49 runs in 2 second(s) 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:06.277 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:06.534 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:06.534 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:06.534 17:55:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:06.534 [2024-10-05 17:55:27.772296] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:06.534 [2024-10-05 17:55:27.772367] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479395 ] 00:06:06.534 [2024-10-05 17:55:27.954049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.791 [2024-10-05 17:55:28.019736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.791 [2024-10-05 17:55:28.079242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.791 [2024-10-05 17:55:28.095595] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:06.791 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:06.791 INFO: Seed: 2331856507 00:06:06.791 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:06.791 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:06.791 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:06.791 INFO: A corpus is not provided, starting from an empty corpus 00:06:06.791 #2 INITED exec/s: 0 rss: 66Mb 00:06:06.791 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:06.791 This may also happen if the target rejected all inputs we tried so far 00:06:06.791 [2024-10-05 17:55:28.171827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:06.791 [2024-10-05 17:55:28.171865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.050 NEW_FUNC[1/714]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:07.050 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:07.050 #4 NEW cov: 12216 ft: 12211 corp: 2/12b lim: 45 exec/s: 0 rss: 73Mb L: 11/11 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:06:07.308 [2024-10-05 17:55:28.513416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.513464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.308 [2024-10-05 17:55:28.513600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a908a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.513619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.308 [2024-10-05 17:55:28.513745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.513772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.308 NEW_FUNC[1/1]: 0x1017448 in posix_sock_readv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1578 00:06:07.308 #5 NEW cov: 12336 ft: 13657 corp: 3/43b lim: 45 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:07.308 [2024-10-05 17:55:28.583104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.583135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.308 [2024-10-05 17:55:28.583254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3694007f cdw11:0b390006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.583271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.308 #6 NEW cov: 12342 ft: 14137 corp: 4/62b lim: 45 exec/s: 0 rss: 73Mb L: 19/31 MS: 1 CMP- DE: "\001\000\1776\224\0139\302"- 00:06:07.308 [2024-10-05 17:55:28.633043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:007f9001 cdw11:36940000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.633072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.308 #7 NEW cov: 12427 ft: 14373 corp: 5/74b lim: 45 exec/s: 0 rss: 73Mb L: 12/31 MS: 1 EraseBytes- 00:06:07.308 [2024-10-05 17:55:28.704154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:37373737 cdw11:37370004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.704184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.308 [2024-10-05 17:55:28.704316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.704334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.308 [2024-10-05 17:55:28.704452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.704470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.308 [2024-10-05 17:55:28.704598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.308 [2024-10-05 17:55:28.704616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.308 #8 NEW cov: 12427 ft: 14818 corp: 6/111b lim: 45 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:07.566 [2024-10-05 17:55:28.774459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:37373737 cdw11:37370004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.774487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.774626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.774644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.774760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.774777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.774896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:07.566 [2024-10-05 17:55:28.774912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.566 #9 NEW cov: 12427 ft: 14948 corp: 7/147b lim: 45 exec/s: 0 rss: 74Mb L: 36/37 MS: 1 EraseBytes- 00:06:07.566 [2024-10-05 17:55:28.844248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.844275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.844402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3601007f cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.844419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.844547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:940b39c2 cdw11:39c20004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.844564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.566 #10 NEW cov: 12427 ft: 14995 corp: 8/174b lim: 45 exec/s: 0 rss: 74Mb L: 27/37 MS: 1 PersAutoDict- DE: "\001\000\1776\224\0139\302"- 00:06:07.566 [2024-10-05 17:55:28.894695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:37373737 cdw11:37370004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.894725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.894863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.894880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.895002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.895021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.895147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.895165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.566 #11 NEW cov: 12427 ft: 15040 corp: 9/210b lim: 45 exec/s: 0 rss: 74Mb L: 36/37 MS: 1 ChangeByte- 00:06:07.566 [2024-10-05 17:55:28.964610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.964638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.964768] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3601007f cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.964784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.566 [2024-10-05 17:55:28.964904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:6bf4c73d cdw11:c63d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.566 [2024-10-05 17:55:28.964921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.566 #12 NEW cov: 12427 ft: 15062 corp: 10/237b lim: 45 exec/s: 0 rss: 74Mb L: 27/37 MS: 1 ChangeBinInt- 00:06:07.824 [2024-10-05 17:55:29.034792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.034818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.034947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3601007f cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.034964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.035088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:007fc701 cdw11:36940000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.035106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.824 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:07.824 #13 NEW cov: 12450 ft: 15115 corp: 11/264b lim: 45 exec/s: 0 rss: 74Mb L: 27/37 MS: 1 PersAutoDict- DE: "\001\000\1776\224\0139\302"- 00:06:07.824 [2024-10-05 17:55:29.105118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90949090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.105145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.105262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3601007f cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.105280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.105398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:6bf4c73d cdw11:c63d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.105414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.824 #14 NEW cov: 12450 ft: 15164 corp: 12/291b lim: 45 exec/s: 0 rss: 74Mb L: 27/37 MS: 1 ChangeBit- 00:06:07.824 [2024-10-05 17:55:29.155464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 
cdw10:37373737 cdw11:37370004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.155493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.155622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.155641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.155762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c2948a39 cdw11:0b390006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.155783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.155911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8aff8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.155929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.824 #15 NEW cov: 12450 ft: 15202 corp: 13/327b lim: 45 exec/s: 15 rss: 74Mb L: 36/37 MS: 1 CrossOver- 00:06:07.824 [2024-10-05 17:55:29.204765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:007f9001 cdw11:36940000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.204793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.824 #16 NEW cov: 12450 ft: 15301 corp: 14/339b lim: 45 exec/s: 16 rss: 74Mb L: 12/37 MS: 1 ChangeByte- 00:06:07.824 [2024-10-05 17:55:29.275251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.275279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.824 [2024-10-05 17:55:29.275410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3694007f cdw11:0b390006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:07.824 [2024-10-05 17:55:29.275427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.082 #17 NEW cov: 12450 ft: 15343 corp: 15/359b lim: 45 exec/s: 17 rss: 74Mb L: 20/37 MS: 1 InsertByte- 00:06:08.082 [2024-10-05 17:55:29.325701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.325730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.082 [2024-10-05 17:55:29.325861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:01007f36 cdw11:7f7f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.325879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.082 [2024-10-05 
17:55:29.326007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:940b39c2 cdw11:39c20004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.326024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.082 #18 NEW cov: 12450 ft: 15371 corp: 16/386b lim: 45 exec/s: 18 rss: 74Mb L: 27/37 MS: 1 CopyPart- 00:06:08.082 [2024-10-05 17:55:29.375281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.375308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.082 #19 NEW cov: 12450 ft: 15390 corp: 17/397b lim: 45 exec/s: 19 rss: 74Mb L: 11/37 MS: 1 ChangeByte- 00:06:08.082 [2024-10-05 17:55:29.426549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90949090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.426578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.082 [2024-10-05 17:55:29.426711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7f360100 cdw11:01000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.426729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.082 [2024-10-05 17:55:29.426853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3d6b0bc7 cdw11:f4c60004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.082 [2024-10-05 17:55:29.426872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.082 [2024-10-05 17:55:29.426993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3601007f cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.083 [2024-10-05 17:55:29.427011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.083 [2024-10-05 17:55:29.427135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:6bf4c73d cdw11:c63d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.083 [2024-10-05 17:55:29.427151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.083 #20 NEW cov: 12450 ft: 15481 corp: 18/442b lim: 45 exec/s: 20 rss: 74Mb L: 45/45 MS: 1 CopyPart- 00:06:08.083 [2024-10-05 17:55:29.496256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.083 [2024-10-05 17:55:29.496283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.083 [2024-10-05 17:55:29.496406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3601007f cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.083 [2024-10-05 17:55:29.496424] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.083 [2024-10-05 17:55:29.496548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:940b39c2 cdw11:39c20004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.083 [2024-10-05 17:55:29.496568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.083 #21 NEW cov: 12450 ft: 15501 corp: 19/469b lim: 45 exec/s: 21 rss: 74Mb L: 27/45 MS: 1 ChangeByte- 00:06:08.341 [2024-10-05 17:55:29.546433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.546464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.546592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a908a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.546612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.546735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.546750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.341 #22 NEW cov: 12450 ft: 15525 corp: 20/500b lim: 45 exec/s: 22 rss: 74Mb L: 31/45 MS: 1 ChangeBinInt- 00:06:08.341 [2024-10-05 17:55:29.597032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90949090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.597060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.597192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7f360100 cdw11:01000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.597208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.597345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0bc73694 cdw11:3d6b0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.597379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.597506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:007f9001 cdw11:36010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.597525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.597654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:c73d940b cdw11:c63d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.597671] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:08.341 #23 NEW cov: 12450 ft: 15540 corp: 21/545b lim: 45 exec/s: 23 rss: 74Mb L: 45/45 MS: 1 CopyPart- 00:06:08.341 [2024-10-05 17:55:29.666482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90019090 cdw11:007f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.666511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.666640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:940b7f36 cdw11:39c20004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.666656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.341 #24 NEW cov: 12450 ft: 15551 corp: 22/567b lim: 45 exec/s: 24 rss: 74Mb L: 22/45 MS: 1 EraseBytes- 00:06:08.341 [2024-10-05 17:55:29.716340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:007f9001 cdw11:36940000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.716369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.341 #25 NEW cov: 12450 ft: 15602 corp: 23/579b lim: 45 exec/s: 25 rss: 74Mb L: 12/45 MS: 1 ChangeByte- 00:06:08.341 [2024-10-05 17:55:29.767372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:37373737 cdw11:37370004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.767401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.767522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.767538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.767657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.767674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.341 [2024-10-05 17:55:29.767798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.341 [2024-10-05 17:55:29.767814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.341 #26 NEW cov: 12450 ft: 15610 corp: 24/615b lim: 45 exec/s: 26 rss: 74Mb L: 36/45 MS: 1 ChangeByte- 00:06:08.599 [2024-10-05 17:55:29.817239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.817272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.599 [2024-10-05 
17:55:29.817398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:01007f00 cdw11:7f7f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.817414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.599 [2024-10-05 17:55:29.817532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:940b39c2 cdw11:39c20004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.817551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.599 #27 NEW cov: 12450 ft: 15639 corp: 25/642b lim: 45 exec/s: 27 rss: 75Mb L: 27/45 MS: 1 ChangeByte- 00:06:08.599 [2024-10-05 17:55:29.887212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909074 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.887240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.599 [2024-10-05 17:55:29.887361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3694007f cdw11:0b390006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.887378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.599 #28 NEW cov: 12450 ft: 15681 corp: 26/662b lim: 45 exec/s: 28 rss: 75Mb L: 20/45 MS: 1 ChangeByte- 00:06:08.599 [2024-10-05 17:55:29.957994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:37373737 cdw11:37370004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.958023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.599 [2024-10-05 17:55:29.958148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:8a900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.958168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.599 [2024-10-05 17:55:29.958300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.958319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.599 [2024-10-05 17:55:29.958439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:29.958457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:08.599 #29 NEW cov: 12450 ft: 15688 corp: 27/698b lim: 45 exec/s: 29 rss: 75Mb L: 36/45 MS: 1 ShuffleBytes- 00:06:08.599 [2024-10-05 17:55:30.027346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909074 cdw11:90900006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.599 [2024-10-05 17:55:30.027376] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.858 #30 NEW cov: 12450 ft: 15701 corp: 28/709b lim: 45 exec/s: 30 rss: 75Mb L: 11/45 MS: 1 EraseBytes- 00:06:08.858 [2024-10-05 17:55:30.098125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90480004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.858 [2024-10-05 17:55:30.098158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:08.858 [2024-10-05 17:55:30.098283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0001007f cdw11:007f0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.858 [2024-10-05 17:55:30.098306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:08.858 [2024-10-05 17:55:30.098430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c2940b39 cdw11:0b390006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:08.858 [2024-10-05 17:55:30.098448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:08.858 #31 NEW cov: 12450 ft: 15715 corp: 29/737b lim: 45 exec/s: 15 rss: 75Mb L: 28/45 MS: 1 InsertByte- 00:06:08.858 #31 DONE cov: 12450 ft: 15715 corp: 29/737b lim: 45 exec/s: 15 rss: 75Mb 00:06:08.858 ###### Recommended dictionary. ###### 00:06:08.858 "\001\000\1776\224\0139\302" # Uses: 2 00:06:08.858 ###### End of recommended dictionary. ###### 00:06:08.858 Done 31 runs in 2 second(s) 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:08.858 17:55:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:08.858 [2024-10-05 17:55:30.310666] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:08.858 [2024-10-05 17:55:30.310736] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479930 ] 00:06:09.116 [2024-10-05 17:55:30.497018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.116 [2024-10-05 17:55:30.564302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.373 [2024-10-05 17:55:30.623731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.373 [2024-10-05 17:55:30.640042] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:09.373 INFO: Running with entropic power schedule (0xFF, 100). 00:06:09.373 INFO: Seed: 580882172 00:06:09.373 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:09.373 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:09.373 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:09.373 INFO: A corpus is not provided, starting from an empty corpus 00:06:09.373 #2 INITED exec/s: 0 rss: 64Mb 00:06:09.373 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:09.373 This may also happen if the target rejected all inputs we tried so far 00:06:09.373 [2024-10-05 17:55:30.706533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:09.373 [2024-10-05 17:55:30.706568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.631 NEW_FUNC[1/712]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:09.631 NEW_FUNC[2/712]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:09.631 #3 NEW cov: 12137 ft: 12132 corp: 2/3b lim: 10 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:06:09.631 [2024-10-05 17:55:31.037228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.631 [2024-10-05 17:55:31.037270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.631 [2024-10-05 17:55:31.037395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.631 [2024-10-05 17:55:31.037416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.631 NEW_FUNC[1/1]: 0x19732d8 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1159 00:06:09.631 #4 NEW cov: 12253 ft: 13033 corp: 3/7b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:09.631 [2024-10-05 17:55:31.087102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000110a cdw11:00000000 00:06:09.631 [2024-10-05 17:55:31.087132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.889 #7 NEW cov: 12259 ft: 13365 corp: 4/9b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 3 ChangeByte-ChangeByte-CrossOver- 00:06:09.889 [2024-10-05 17:55:31.127199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d1c2 cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.127228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.889 #10 NEW cov: 12344 ft: 13701 corp: 5/11b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 3 ChangeBit-ChangeBinInt-InsertByte- 00:06:09.889 [2024-10-05 17:55:31.177557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.177585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.889 [2024-10-05 17:55:31.177700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.177716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.889 #11 NEW cov: 12344 ft: 13731 corp: 6/15b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ShuffleBytes- 00:06:09.889 [2024-10-05 17:55:31.247761] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d1c2 cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.247788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.889 [2024-10-05 17:55:31.247902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d1c2 cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.247923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.889 #12 NEW cov: 12344 ft: 13832 corp: 7/19b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:06:09.889 [2024-10-05 17:55:31.308005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.308032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:09.889 [2024-10-05 17:55:31.308151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:09.889 [2024-10-05 17:55:31.308183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.889 #13 NEW cov: 12344 ft: 13900 corp: 8/23b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CMP- DE: "\001\000"- 00:06:10.147 [2024-10-05 17:55:31.377976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c20a cdw11:00000000 00:06:10.147 [2024-10-05 17:55:31.378004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 #14 NEW cov: 12344 ft: 13955 corp: 9/25b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CrossOver- 00:06:10.147 [2024-10-05 17:55:31.428035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d154 cdw11:00000000 00:06:10.147 [2024-10-05 17:55:31.428064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 #15 NEW cov: 12344 ft: 13997 corp: 10/27b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:06:10.147 [2024-10-05 17:55:31.478424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d1c2 cdw11:00000000 00:06:10.147 [2024-10-05 17:55:31.478453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 [2024-10-05 17:55:31.478568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d1cc cdw11:00000000 00:06:10.147 [2024-10-05 17:55:31.478587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.147 #16 NEW cov: 12344 ft: 14082 corp: 11/31b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:10.147 [2024-10-05 17:55:31.548434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:10.147 [2024-10-05 17:55:31.548462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.147 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:10.147 #17 NEW cov: 12367 ft: 14161 corp: 12/33b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CMP- DE: "\001\000"- 00:06:10.405 [2024-10-05 17:55:31.618750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d141 cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.618779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.405 [2024-10-05 17:55:31.618898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000c2d1 cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.618915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.405 #18 NEW cov: 12367 ft: 14186 corp: 13/38b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:06:10.405 [2024-10-05 17:55:31.688815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a41 cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.688844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.405 #19 NEW cov: 12367 ft: 14204 corp: 14/40b lim: 10 exec/s: 19 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:10.405 [2024-10-05 17:55:31.739777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.739804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.405 [2024-10-05 17:55:31.739918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.739938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.405 [2024-10-05 17:55:31.740049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.740065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.405 [2024-10-05 17:55:31.740183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.740203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:10.405 [2024-10-05 17:55:31.740329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.740346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:10.405 #20 NEW cov: 12367 ft: 14515 corp: 15/50b lim: 10 exec/s: 20 rss: 72Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:10.405 [2024-10-05 17:55:31.789092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.789120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.405 #21 NEW cov: 
12367 ft: 14580 corp: 16/52b lim: 10 exec/s: 21 rss: 72Mb L: 2/10 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:10.405 [2024-10-05 17:55:31.839433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c2ff cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.839460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.405 [2024-10-05 17:55:31.839577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:10.405 [2024-10-05 17:55:31.839595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.405 #22 NEW cov: 12367 ft: 14652 corp: 17/57b lim: 10 exec/s: 22 rss: 72Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:06:10.663 [2024-10-05 17:55:31.889605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:10.664 [2024-10-05 17:55:31.889631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.664 [2024-10-05 17:55:31.889757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 00:06:10.664 [2024-10-05 17:55:31.889774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.664 #23 NEW cov: 12367 ft: 14689 corp: 18/61b lim: 10 exec/s: 23 rss: 73Mb L: 4/10 MS: 1 ChangeBinInt- 00:06:10.664 [2024-10-05 17:55:31.959648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000b0a cdw11:00000000 00:06:10.664 [2024-10-05 17:55:31.959677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.664 #24 NEW cov: 12367 ft: 14724 corp: 19/63b lim: 10 exec/s: 24 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:06:10.664 [2024-10-05 17:55:32.010036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:10.664 [2024-10-05 17:55:32.010067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.664 [2024-10-05 17:55:32.010191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 00:06:10.664 [2024-10-05 17:55:32.010210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.664 #25 NEW cov: 12367 ft: 14736 corp: 20/67b lim: 10 exec/s: 25 rss: 73Mb L: 4/10 MS: 1 ShuffleBytes- 00:06:10.664 [2024-10-05 17:55:32.069999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:10.664 [2024-10-05 17:55:32.070025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.664 #26 NEW cov: 12367 ft: 14759 corp: 21/70b lim: 10 exec/s: 26 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:06:10.664 [2024-10-05 17:55:32.110082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a04 cdw11:00000000 00:06:10.664 [2024-10-05 17:55:32.110108] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.922 #27 NEW cov: 12367 ft: 14772 corp: 22/72b lim: 10 exec/s: 27 rss: 73Mb L: 2/10 MS: 1 EraseBytes- 00:06:10.922 [2024-10-05 17:55:32.150413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e6e6 cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.150441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.922 [2024-10-05 17:55:32.150551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e60a cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.150578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.922 #28 NEW cov: 12367 ft: 14791 corp: 23/76b lim: 10 exec/s: 28 rss: 73Mb L: 4/10 MS: 1 InsertRepeatedBytes- 00:06:10.922 [2024-10-05 17:55:32.190261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.190289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.922 #29 NEW cov: 12367 ft: 14815 corp: 24/78b lim: 10 exec/s: 29 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:06:10.922 [2024-10-05 17:55:32.250681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.250709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.922 [2024-10-05 17:55:32.250823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.250839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.922 #30 NEW cov: 12367 ft: 14825 corp: 25/82b lim: 10 exec/s: 30 rss: 73Mb L: 4/10 MS: 1 ChangeBit- 00:06:10.922 [2024-10-05 17:55:32.310975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.311001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:10.922 [2024-10-05 17:55:32.311118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.311134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.922 [2024-10-05 17:55:32.311256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.922 [2024-10-05 17:55:32.311275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:10.922 #31 NEW cov: 12367 ft: 15002 corp: 26/88b lim: 10 exec/s: 31 rss: 73Mb L: 6/10 MS: 1 CMP- DE: "\037\000\000\000"- 00:06:10.922 [2024-10-05 17:55:32.380855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002611 cdw11:00000000 00:06:10.922 [2024-10-05 
17:55:32.380881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.180 #32 NEW cov: 12367 ft: 15011 corp: 27/91b lim: 10 exec/s: 32 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:06:11.180 [2024-10-05 17:55:32.451824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.451853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.180 [2024-10-05 17:55:32.451966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.451992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.180 [2024-10-05 17:55:32.452105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000003ff cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.452123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.180 [2024-10-05 17:55:32.452242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.452261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.180 [2024-10-05 17:55:32.452370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.452387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:11.180 #33 NEW cov: 12367 ft: 15017 corp: 28/101b lim: 10 exec/s: 33 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:11.180 [2024-10-05 17:55:32.521576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d1c2 cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.521603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.180 [2024-10-05 17:55:32.521714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.521729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.180 [2024-10-05 17:55:32.521840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.521855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.180 #34 NEW cov: 12367 ft: 15023 corp: 29/107b lim: 10 exec/s: 34 rss: 73Mb L: 6/10 MS: 1 PersAutoDict- DE: "\037\000\000\000"- 00:06:11.180 [2024-10-05 17:55:32.571405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.571435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.180 #35 NEW cov: 12367 ft: 15029 corp: 30/109b 
lim: 10 exec/s: 35 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:06:11.180 [2024-10-05 17:55:32.621491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d1d1 cdw11:00000000 00:06:11.180 [2024-10-05 17:55:32.621520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.439 #36 NEW cov: 12367 ft: 15129 corp: 31/112b lim: 10 exec/s: 36 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:06:11.439 [2024-10-05 17:55:32.661594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:11.439 [2024-10-05 17:55:32.661621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.439 #38 NEW cov: 12367 ft: 15140 corp: 32/114b lim: 10 exec/s: 19 rss: 73Mb L: 2/10 MS: 2 CrossOver-InsertByte- 00:06:11.439 #38 DONE cov: 12367 ft: 15140 corp: 32/114b lim: 10 exec/s: 19 rss: 73Mb 00:06:11.439 ###### Recommended dictionary. ###### 00:06:11.439 "\001\000" # Uses: 1 00:06:11.439 "\037\000\000\000" # Uses: 1 00:06:11.439 ###### End of recommended dictionary. ###### 00:06:11.439 Done 38 runs in 2 second(s) 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:11.439 17:55:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:11.439 [2024-10-05 17:55:32.854678] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:11.439 [2024-10-05 17:55:32.854767] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480341 ] 00:06:11.698 [2024-10-05 17:55:33.032504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.698 [2024-10-05 17:55:33.101683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.698 [2024-10-05 17:55:33.160335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.956 [2024-10-05 17:55:33.176658] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:11.956 INFO: Running with entropic power schedule (0xFF, 100). 00:06:11.956 INFO: Seed: 3115865112 00:06:11.956 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:11.956 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:11.956 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:11.956 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.956 #2 INITED exec/s: 0 rss: 65Mb 00:06:11.956 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:11.956 This may also happen if the target rejected all inputs we tried so far 00:06:11.956 [2024-10-05 17:55:33.225814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:11.956 [2024-10-05 17:55:33.225842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.213 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:12.213 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:12.213 #3 NEW cov: 12140 ft: 12136 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:06:12.213 [2024-10-05 17:55:33.557056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac6 cdw11:00000000 00:06:12.213 [2024-10-05 17:55:33.557088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.557141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.557155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.557211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.557224] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.557293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c60a cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.557307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.214 #4 NEW cov: 12253 ft: 13064 corp: 3/11b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:12.214 [2024-10-05 17:55:33.617155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000169 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.617180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.617255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.617269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.617324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adf5 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.617338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.617391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004208 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.617405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.214 #7 NEW cov: 12259 ft: 13352 corp: 4/20b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 3 EraseBytes-ShuffleBytes-CMP- DE: "\001i\361\327\255\365B\010"- 00:06:12.214 [2024-10-05 17:55:33.657256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.657281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.657340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.657354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.657405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.657418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.214 [2024-10-05 17:55:33.657469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.214 [2024-10-05 17:55:33.657482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.478 #8 NEW cov: 12344 ft: 13606 corp: 5/29b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:12.478 [2024-10-05 17:55:33.697354] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac6 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.697380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.697432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.697446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.697501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.697514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.697566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c60a cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.697578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.478 #9 NEW cov: 12344 ft: 13676 corp: 6/37b lim: 10 exec/s: 0 rss: 73Mb L: 8/9 MS: 1 ShuffleBytes- 00:06:12.478 [2024-10-05 17:55:33.757525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac7 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.757551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.757605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.757619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.757671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.757684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.757737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c60a cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.757750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.478 #10 NEW cov: 12344 ft: 13803 corp: 7/45b lim: 10 exec/s: 0 rss: 73Mb L: 8/9 MS: 1 ChangeBit- 00:06:12.478 [2024-10-05 17:55:33.817805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.817830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.817885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.817902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.817954] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.817984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.818038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.818051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.478 [2024-10-05 17:55:33.818104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002022 cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.818118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.478 #11 NEW cov: 12344 ft: 13870 corp: 8/55b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:06:12.478 [2024-10-05 17:55:33.877996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.478 [2024-10-05 17:55:33.878021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.878075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000169 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.878088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.878142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.878155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.878207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000adf5 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.878220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.878270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00004208 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.878283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.479 #12 NEW cov: 12344 ft: 13896 corp: 9/65b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\001i\361\327\255\365B\010"- 00:06:12.479 [2024-10-05 17:55:33.918110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.918135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.918193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d769 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.918207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 
17:55:33.918261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f101 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.918274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.918328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000adf5 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.918341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.479 [2024-10-05 17:55:33.918397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00004208 cdw11:00000000 00:06:12.479 [2024-10-05 17:55:33.918409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.739 #13 NEW cov: 12344 ft: 13915 corp: 10/75b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:12.739 [2024-10-05 17:55:33.977753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.739 [2024-10-05 17:55:33.977778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.739 #14 NEW cov: 12344 ft: 14018 corp: 11/77b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:12.739 [2024-10-05 17:55:34.018279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.018304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.018358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002020 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.018372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.018425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000020e0 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.018437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.018491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d520 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.018505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.739 #15 NEW cov: 12344 ft: 14040 corp: 12/86b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:12.739 [2024-10-05 17:55:34.057985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000acc cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.058011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.739 #16 NEW cov: 12344 ft: 14106 corp: 13/88b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:06:12.739 [2024-10-05 17:55:34.118573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000101 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.118599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.118669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000069f1 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.118683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.118737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d7ad cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.118750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.118803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f542 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.118816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.739 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:12.739 #17 NEW cov: 12367 ft: 14226 corp: 14/97b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 PersAutoDict- DE: "\001i\361\327\255\365B\010"- 00:06:12.739 [2024-10-05 17:55:34.178705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac6 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.178731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.178784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6eb cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.178797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.178851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.178864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.739 [2024-10-05 17:55:34.178917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c60a cdw11:00000000 00:06:12.739 [2024-10-05 17:55:34.178930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.739 #18 NEW cov: 12367 ft: 14259 corp: 15/105b lim: 10 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 ChangeByte- 00:06:12.997 [2024-10-05 17:55:34.218824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005dc6 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.218849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.218904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.218918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.218972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.218986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.219041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c60a cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.219054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.997 #19 NEW cov: 12367 ft: 14320 corp: 16/113b lim: 10 exec/s: 19 rss: 74Mb L: 8/10 MS: 1 ChangeByte- 00:06:12.997 [2024-10-05 17:55:34.259050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.259076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.259132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d769 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.259145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.259197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f101 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.259211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.259265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008df5 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.259279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.259329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00004208 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.259345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.997 #20 NEW cov: 12367 ft: 14331 corp: 17/123b lim: 10 exec/s: 20 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:06:12.997 [2024-10-05 17:55:34.319097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000169 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.319123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.319178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.319196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.319254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adf5 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.319267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.319322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004208 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.319335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.997 #21 NEW cov: 12367 ft: 14377 corp: 18/132b lim: 10 exec/s: 21 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:06:12.997 [2024-10-05 17:55:34.359106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac6 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.359131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.359192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.359206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.359258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.359271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.997 #22 NEW cov: 12367 ft: 14584 corp: 19/138b lim: 10 exec/s: 22 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:06:12.997 [2024-10-05 17:55:34.399342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.399368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.399422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.399436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.399488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.399518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.997 [2024-10-05 17:55:34.399572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 00:06:12.997 [2024-10-05 17:55:34.399586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.997 #23 NEW cov: 12367 ft: 14603 corp: 20/147b lim: 10 exec/s: 23 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:13.255 [2024-10-05 17:55:34.459554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.459584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.459640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.255 [2024-10-05 
17:55:34.459654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.459711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.459725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.459779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.459792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.255 #24 NEW cov: 12367 ft: 14609 corp: 21/156b lim: 10 exec/s: 24 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:06:13.255 [2024-10-05 17:55:34.519837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005dc6 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.519863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.519918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.519931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.519984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.519997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.520049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c642 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.520062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.520116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000080a cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.520129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.255 #25 NEW cov: 12367 ft: 14640 corp: 22/166b lim: 10 exec/s: 25 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:06:13.255 [2024-10-05 17:55:34.579593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000acc cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.579618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.579691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000ac7 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.579705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.255 #26 NEW cov: 12367 ft: 14800 corp: 23/170b lim: 10 exec/s: 26 rss: 74Mb L: 4/10 MS: 1 CrossOver- 00:06:13.255 [2024-10-05 17:55:34.640028] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000169 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.640053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.640108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.640122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.640180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adf5 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.640198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.640250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003b08 cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.640262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.255 #27 NEW cov: 12367 ft: 14813 corp: 24/179b lim: 10 exec/s: 27 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:13.255 [2024-10-05 17:55:34.679906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.679931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.255 [2024-10-05 17:55:34.679985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000cccc cdw11:00000000 00:06:13.255 [2024-10-05 17:55:34.679998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.255 #28 NEW cov: 12367 ft: 14838 corp: 25/183b lim: 10 exec/s: 28 rss: 74Mb L: 4/10 MS: 1 CopyPart- 00:06:13.514 [2024-10-05 17:55:34.720068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.720100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.720157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d769 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.720170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.514 #29 NEW cov: 12367 ft: 14847 corp: 26/188b lim: 10 exec/s: 29 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:06:13.514 [2024-10-05 17:55:34.760374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.760399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.760454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.760467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.760522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.760536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.760593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d520 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.760606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.514 #30 NEW cov: 12370 ft: 14999 corp: 27/197b lim: 10 exec/s: 30 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:06:13.514 [2024-10-05 17:55:34.820576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac7 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.820602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.820657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.820670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.820728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.820741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.820797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.820810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.514 #31 NEW cov: 12370 ft: 15010 corp: 28/205b lim: 10 exec/s: 31 rss: 75Mb L: 8/10 MS: 1 CrossOver- 00:06:13.514 [2024-10-05 17:55:34.880726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000169 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.880752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.880808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.880821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.880875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000adf5 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.880888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.880944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004c08 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.880958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.514 #32 NEW cov: 12370 ft: 15016 corp: 29/214b lim: 10 exec/s: 32 rss: 75Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:13.514 [2024-10-05 17:55:34.920605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000169 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.920631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.920688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.920702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.514 #33 NEW cov: 12370 ft: 15037 corp: 30/219b lim: 10 exec/s: 33 rss: 75Mb L: 5/10 MS: 1 EraseBytes- 00:06:13.514 [2024-10-05 17:55:34.960942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac1 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.960967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.961022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6eb cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.961036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.961087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.961117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.514 [2024-10-05 17:55:34.961171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c60a cdw11:00000000 00:06:13.514 [2024-10-05 17:55:34.961184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.772 #34 NEW cov: 12370 ft: 15060 corp: 31/227b lim: 10 exec/s: 34 rss: 75Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:13.772 [2024-10-05 17:55:35.021285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.021310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.772 [2024-10-05 17:55:35.021367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000169 cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.021380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.772 [2024-10-05 17:55:35.021433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f130 cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.021447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.772 [2024-10-05 17:55:35.021501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000adf5 
cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.021514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.772 [2024-10-05 17:55:35.021568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00004208 cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.021580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.772 #35 NEW cov: 12370 ft: 15089 corp: 32/237b lim: 10 exec/s: 35 rss: 75Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:13.772 [2024-10-05 17:55:35.061424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.061449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.772 [2024-10-05 17:55:35.061504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.772 [2024-10-05 17:55:35.061517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.772 [2024-10-05 17:55:35.061571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000200a cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.061584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.061638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.061651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.061705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.061719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.101512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.101539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.101595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000acc cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.101609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.101663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000cc0a cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.101677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.101735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.101749] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.101801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002020 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.101815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.773 #37 NEW cov: 12370 ft: 15103 corp: 33/247b lim: 10 exec/s: 37 rss: 75Mb L: 10/10 MS: 2 CopyPart-CrossOver- 00:06:13.773 [2024-10-05 17:55:35.141598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005dc6 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.141624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.141681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.141695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.141748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000042c6 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.141761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.141815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c6c6 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.141827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.141883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000080a cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.141895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.773 #38 NEW cov: 12370 ft: 15108 corp: 34/257b lim: 10 exec/s: 38 rss: 75Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:13.773 [2024-10-05 17:55:35.201672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000169 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.201697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.201754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f1d7 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.201767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.201822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000049f5 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.201835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.773 [2024-10-05 17:55:35.201888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004c08 cdw11:00000000 00:06:13.773 [2024-10-05 17:55:35.201901] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.031 #39 NEW cov: 12370 ft: 15114 corp: 35/266b lim: 10 exec/s: 19 rss: 75Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:14.031 #39 DONE cov: 12370 ft: 15114 corp: 35/266b lim: 10 exec/s: 19 rss: 75Mb 00:06:14.031 ###### Recommended dictionary. ###### 00:06:14.031 "\001i\361\327\255\365B\010" # Uses: 2 00:06:14.031 ###### End of recommended dictionary. ###### 00:06:14.031 Done 39 runs in 2 second(s) 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:14.031 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:14.032 17:55:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:14.032 [2024-10-05 17:55:35.401177] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
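The round-8 launch sequence traced by nvmf/run.sh just above follows a fixed recipe: derive a unique TCP port from the fuzzer index, rewrite the listener port in a per-round JSON config, install LeakSanitizer suppressions for two known shutdown-path leaks, then start llvm_nvme_fuzz against the resulting transport ID. The following is a condensed sketch of that recipe, not the verbatim script: $rootdir and $output_dir are stand-ins for the workspace paths printed in the trace, and the redirections of the sed and echo output into the per-round config and suppression file are inferred from the surrounding lines rather than shown by the shell trace itself.

    # Condensed sketch of the per-round launch traced by nvmf/run.sh above.
    fuzzer_type=8     # -Z index; also selects the corpus dir and TCP port
    timen=1           # -t: run this fuzzer for 1 second
    core=0x1          # -m: reactor core mask
    port="44$(printf %02d "$fuzzer_type")"                    # -> 4408
    corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
    nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    suppress_file=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Rewrite the default listener port (4420) for this round (redirect inferred).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Suppress two known shutdown-path leaks (redirects inferred).
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$output_dir/llvm/" -F "$trid" \
        -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

Deriving a distinct trsvcid per round (4407, 4408, ...) presumably lets each successive target bind its TCP listener without colliding with the port just released by the previous round, which is consistent with the "NVMe/TCP Target Listening on 127.0.0.1 port 4408" line that follows.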
00:06:14.032 [2024-10-05 17:55:35.401248] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480754 ] 00:06:14.290 [2024-10-05 17:55:35.572690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.290 [2024-10-05 17:55:35.635321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.290 [2024-10-05 17:55:35.694077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.290 [2024-10-05 17:55:35.710459] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:14.290 INFO: Running with entropic power schedule (0xFF, 100). 00:06:14.290 INFO: Seed: 1355911971 00:06:14.290 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:14.290 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:14.290 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:14.290 INFO: A corpus is not provided, starting from an empty corpus 00:06:14.547 [2024-10-05 17:55:35.776550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.547 [2024-10-05 17:55:35.776587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.547 #2 INITED cov: 12168 ft: 12161 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:14.547 [2024-10-05 17:55:35.826668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.547 [2024-10-05 17:55:35.826701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.547 #3 NEW cov: 12281 ft: 12998 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:06:14.547 [2024-10-05 17:55:35.896851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.547 [2024-10-05 17:55:35.896881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.547 #4 NEW cov: 12287 ft: 13120 corp: 3/3b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:14.547 [2024-10-05 17:55:35.947272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.547 [2024-10-05 17:55:35.947301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.547 [2024-10-05 17:55:35.947429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.547 [2024-10-05 17:55:35.947445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.547 #5 NEW cov: 12372 ft: 13976 corp: 4/5b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:06:14.805 [2024-10-05 17:55:36.017544] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.017572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.805 [2024-10-05 17:55:36.017695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.017714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.805 #6 NEW cov: 12372 ft: 14044 corp: 5/7b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:06:14.805 [2024-10-05 17:55:36.067691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.067719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.805 [2024-10-05 17:55:36.067848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.067866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.805 #7 NEW cov: 12372 ft: 14203 corp: 6/9b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:06:14.805 [2024-10-05 17:55:36.137912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.137942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.805 [2024-10-05 17:55:36.138071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.138088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.805 #8 NEW cov: 12372 ft: 14281 corp: 7/11b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:06:14.805 [2024-10-05 17:55:36.188045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.805 [2024-10-05 17:55:36.188072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.806 [2024-10-05 17:55:36.188196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.806 [2024-10-05 17:55:36.188213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.806 #9 NEW cov: 12372 ft: 14390 corp: 8/13b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:06:14.806 [2024-10-05 17:55:36.258813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 
cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.806 [2024-10-05 17:55:36.258839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.806 [2024-10-05 17:55:36.258963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.806 [2024-10-05 17:55:36.258990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.806 [2024-10-05 17:55:36.259108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.806 [2024-10-05 17:55:36.259126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.806 [2024-10-05 17:55:36.259262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.806 [2024-10-05 17:55:36.259280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.063 #10 NEW cov: 12372 ft: 14698 corp: 9/17b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:06:15.064 [2024-10-05 17:55:36.308417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.308444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.064 [2024-10-05 17:55:36.308575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.308592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.064 #11 NEW cov: 12372 ft: 14740 corp: 10/19b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:06:15.064 [2024-10-05 17:55:36.378585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.378612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.064 [2024-10-05 17:55:36.378729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.378746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.064 #12 NEW cov: 12372 ft: 14824 corp: 11/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CrossOver- 00:06:15.064 [2024-10-05 17:55:36.448810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.448836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:15.064 [2024-10-05 17:55:36.448970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.448993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.064 #13 NEW cov: 12372 ft: 14855 corp: 12/23b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CrossOver- 00:06:15.064 [2024-10-05 17:55:36.518777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.064 [2024-10-05 17:55:36.518804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.321 #14 NEW cov: 12372 ft: 14905 corp: 13/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeByte- 00:06:15.321 [2024-10-05 17:55:36.569260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.321 [2024-10-05 17:55:36.569288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.321 [2024-10-05 17:55:36.569412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.321 [2024-10-05 17:55:36.569430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.321 #15 NEW cov: 12372 ft: 14917 corp: 14/26b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBinInt- 00:06:15.321 [2024-10-05 17:55:36.639405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.321 [2024-10-05 17:55:36.639431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.321 [2024-10-05 17:55:36.639550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.321 [2024-10-05 17:55:36.639567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.579 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:15.579 #16 NEW cov: 12395 ft: 15032 corp: 15/28b lim: 5 exec/s: 16 rss: 74Mb L: 2/4 MS: 1 ChangeByte- 00:06:15.579 [2024-10-05 17:55:36.960890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:36.960925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.579 [2024-10-05 17:55:36.961054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:36.961071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.579 [2024-10-05 17:55:36.961191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:36.961209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.579 [2024-10-05 17:55:36.961328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:36.961345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.579 #17 NEW cov: 12395 ft: 15090 corp: 16/32b lim: 5 exec/s: 17 rss: 74Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:15.579 [2024-10-05 17:55:37.030984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:37.031015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.579 [2024-10-05 17:55:37.031145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:37.031162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.579 [2024-10-05 17:55:37.031287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:37.031306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.579 [2024-10-05 17:55:37.031429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.579 [2024-10-05 17:55:37.031447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.837 #18 NEW cov: 12395 ft: 15120 corp: 17/36b lim: 5 exec/s: 18 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:06:15.837 [2024-10-05 17:55:37.100415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-10-05 17:55:37.100445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 #19 NEW cov: 12395 ft: 15143 corp: 18/37b lim: 5 exec/s: 19 rss: 74Mb L: 1/4 MS: 1 ChangeBit- 00:06:15.837 [2024-10-05 17:55:37.150878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-10-05 17:55:37.150908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 [2024-10-05 17:55:37.151028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-10-05 17:55:37.151047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.837 #20 NEW cov: 12395 ft: 15167 corp: 19/39b lim: 5 exec/s: 20 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:06:15.837 [2024-10-05 17:55:37.200769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-10-05 17:55:37.200798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 #21 NEW cov: 12395 ft: 15176 corp: 20/40b lim: 5 exec/s: 21 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:06:15.837 [2024-10-05 17:55:37.251481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-10-05 17:55:37.251511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.837 [2024-10-05 17:55:37.251642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.837 [2024-10-05 17:55:37.251659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.837 [2024-10-05 17:55:37.251783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.838 [2024-10-05 17:55:37.251799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.838 #22 NEW cov: 12395 ft: 15337 corp: 21/43b lim: 5 exec/s: 22 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:06:16.096 [2024-10-05 17:55:37.301986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.302015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.302140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.302159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.302281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.302299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.302426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.302444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:16.096 #23 NEW cov: 12395 ft: 15367 corp: 22/47b lim: 5 exec/s: 23 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:16.096 [2024-10-05 17:55:37.372331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.372360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.372495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.372513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.372645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.372661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.372785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.372804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.372931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.372950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:16.096 #24 NEW cov: 12395 ft: 15427 corp: 23/52b lim: 5 exec/s: 24 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:06:16.096 [2024-10-05 17:55:37.421635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.421663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.421784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.421804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.096 #25 NEW cov: 12395 ft: 15440 corp: 24/54b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:06:16.096 [2024-10-05 17:55:37.472337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.472364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.472495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.472512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.472637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.472655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.472777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.472794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.096 #26 NEW cov: 12395 ft: 15443 corp: 25/58b lim: 5 exec/s: 26 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:06:16.096 [2024-10-05 17:55:37.542036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.542063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.096 [2024-10-05 17:55:37.542192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.096 [2024-10-05 17:55:37.542212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.354 #27 NEW cov: 12395 ft: 15448 corp: 26/60b lim: 5 exec/s: 27 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:06:16.354 [2024-10-05 17:55:37.612327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.612355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.354 [2024-10-05 17:55:37.612472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.612491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.354 #28 NEW cov: 12395 ft: 15467 corp: 27/62b lim: 5 exec/s: 28 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:06:16.354 [2024-10-05 17:55:37.682571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.682599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.354 [2024-10-05 17:55:37.682729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.682746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.354 
#29 NEW cov: 12395 ft: 15478 corp: 28/64b lim: 5 exec/s: 29 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:16.354 [2024-10-05 17:55:37.732367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.732394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.354 #30 NEW cov: 12395 ft: 15495 corp: 29/65b lim: 5 exec/s: 30 rss: 75Mb L: 1/5 MS: 1 CrossOver- 00:06:16.354 [2024-10-05 17:55:37.782701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.782730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.354 [2024-10-05 17:55:37.782858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:16.354 [2024-10-05 17:55:37.782875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.354 #31 NEW cov: 12395 ft: 15500 corp: 30/67b lim: 5 exec/s: 15 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:16.354 #31 DONE cov: 12395 ft: 15500 corp: 30/67b lim: 5 exec/s: 15 rss: 75Mb 00:06:16.354 Done 31 runs in 2 second(s) 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:16.612 17:55:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:16.612 [2024-10-05 17:55:37.972704] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:16.612 [2024-10-05 17:55:37.972776] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481283 ] 00:06:16.870 [2024-10-05 17:55:38.147615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.870 [2024-10-05 17:55:38.212859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.870 [2024-10-05 17:55:38.271489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.870 [2024-10-05 17:55:38.287818] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:16.870 INFO: Running with entropic power schedule (0xFF, 100). 00:06:16.870 INFO: Seed: 3933904562 00:06:16.870 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:16.870 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:16.870 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:16.870 INFO: A corpus is not provided, starting from an empty corpus 00:06:17.127 [2024-10-05 17:55:38.343262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.343291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.127 #2 INITED cov: 12168 ft: 12164 corp: 1/1b exec/s: 0 rss: 72Mb 00:06:17.127 [2024-10-05 17:55:38.383421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.383448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.127 [2024-10-05 17:55:38.383506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.383520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.127 #3 NEW cov: 12281 ft: 13398 corp: 2/3b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:06:17.127 [2024-10-05 17:55:38.443444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.443469] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.127 #4 NEW cov: 12287 ft: 13585 corp: 3/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:17.127 [2024-10-05 17:55:38.483870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.483895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.127 [2024-10-05 17:55:38.483954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.483969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.127 [2024-10-05 17:55:38.484027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.484040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.127 #5 NEW cov: 12372 ft: 13993 corp: 4/7b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CrossOver- 00:06:17.127 [2024-10-05 17:55:38.523681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.523707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.127 #6 NEW cov: 12372 ft: 14181 corp: 5/8b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 CopyPart- 00:06:17.127 [2024-10-05 17:55:38.563920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.563945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.127 [2024-10-05 17:55:38.564003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.127 [2024-10-05 17:55:38.564016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.384 #7 NEW cov: 12372 ft: 14288 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 InsertByte- 00:06:17.384 [2024-10-05 17:55:38.624275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.384 [2024-10-05 17:55:38.624301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.384 [2024-10-05 17:55:38.624360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.384 [2024-10-05 17:55:38.624374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.384 [2024-10-05 
17:55:38.624430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.384 [2024-10-05 17:55:38.624444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.384 #8 NEW cov: 12372 ft: 14383 corp: 7/13b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CopyPart- 00:06:17.384 [2024-10-05 17:55:38.684080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.384 [2024-10-05 17:55:38.684105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.384 #9 NEW cov: 12372 ft: 14442 corp: 8/14b lim: 5 exec/s: 0 rss: 73Mb L: 1/3 MS: 1 ChangeByte- 00:06:17.384 [2024-10-05 17:55:38.724530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.384 [2024-10-05 17:55:38.724556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.384 [2024-10-05 17:55:38.724613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.724628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.385 [2024-10-05 17:55:38.724684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.724700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.385 #10 NEW cov: 12372 ft: 14477 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 ShuffleBytes- 00:06:17.385 [2024-10-05 17:55:38.764500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.764526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.385 [2024-10-05 17:55:38.764584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.764602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.385 #11 NEW cov: 12372 ft: 14555 corp: 10/19b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 InsertByte- 00:06:17.385 [2024-10-05 17:55:38.825129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.825154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.385 [2024-10-05 17:55:38.825216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.825230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.385 [2024-10-05 17:55:38.825302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.825316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.385 [2024-10-05 17:55:38.825373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.825387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.385 [2024-10-05 17:55:38.825444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.385 [2024-10-05 17:55:38.825458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:17.642 #12 NEW cov: 12372 ft: 14894 corp: 11/24b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:17.643 [2024-10-05 17:55:38.884624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.884650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.643 #13 NEW cov: 12372 ft: 14994 corp: 12/25b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:17.643 [2024-10-05 17:55:38.925081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.925106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:38.925165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.925179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:38.925240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.925254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.643 #14 NEW cov: 12372 ft: 15080 corp: 13/28b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:17.643 [2024-10-05 17:55:38.965215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.965241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:38.965303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.965317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:38.965374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:38.965387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.643 #15 NEW cov: 12372 ft: 15094 corp: 14/31b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:17.643 [2024-10-05 17:55:39.025203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:39.025229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:39.025287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:39.025301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.643 #16 NEW cov: 12372 ft: 15116 corp: 15/33b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:17.643 [2024-10-05 17:55:39.085543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:39.085568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:39.085625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:39.085639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.643 [2024-10-05 17:55:39.085714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.643 [2024-10-05 17:55:39.085728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.901 #17 NEW cov: 12372 ft: 15128 corp: 16/36b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:17.901 [2024-10-05 17:55:39.145557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.145583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.901 [2024-10-05 17:55:39.145642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.145656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.901 #18 NEW cov: 12372 ft: 15154 corp: 17/38b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:17.901 [2024-10-05 17:55:39.185821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.185846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.901 [2024-10-05 17:55:39.185922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.185940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.901 [2024-10-05 17:55:39.185998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.186012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.901 #19 NEW cov: 12372 ft: 15181 corp: 18/41b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:17.901 [2024-10-05 17:55:39.225937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.225963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.901 [2024-10-05 17:55:39.226020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.226034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.901 [2024-10-05 17:55:39.226090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:17.901 [2024-10-05 17:55:39.226103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.159 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:18.159 #20 NEW cov: 12395 ft: 15220 corp: 19/44b lim: 5 exec/s: 20 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:06:18.159 [2024-10-05 17:55:39.536713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 17:55:39.536744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.159 [2024-10-05 17:55:39.536800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 
17:55:39.536814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.159 [2024-10-05 17:55:39.536868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 17:55:39.536881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.159 #21 NEW cov: 12395 ft: 15258 corp: 20/47b lim: 5 exec/s: 21 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:06:18.159 [2024-10-05 17:55:39.576442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 17:55:39.576467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.159 #22 NEW cov: 12395 ft: 15307 corp: 21/48b lim: 5 exec/s: 22 rss: 75Mb L: 1/5 MS: 1 CopyPart- 00:06:18.159 [2024-10-05 17:55:39.616833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 17:55:39.616859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.159 [2024-10-05 17:55:39.616928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 17:55:39.616942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.159 [2024-10-05 17:55:39.616999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.159 [2024-10-05 17:55:39.617013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.417 #23 NEW cov: 12395 ft: 15340 corp: 22/51b lim: 5 exec/s: 23 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:06:18.417 [2024-10-05 17:55:39.677037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.677063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.677134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.677148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.677206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.677221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.417 #24 NEW cov: 12395 ft: 15362 corp: 23/54b lim: 5 exec/s: 
24 rss: 75Mb L: 3/5 MS: 1 ChangeByte- 00:06:18.417 [2024-10-05 17:55:39.717441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.717466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.717521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.717534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.717588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.717602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.717653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.717666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.717718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.717731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.417 #25 NEW cov: 12395 ft: 15367 corp: 24/59b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:06:18.417 [2024-10-05 17:55:39.777297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.777322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.777378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.777394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.777448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.777461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.417 #26 NEW cov: 12395 ft: 15387 corp: 25/62b lim: 5 exec/s: 26 rss: 75Mb L: 3/5 MS: 1 ChangeByte- 00:06:18.417 [2024-10-05 17:55:39.837479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.837504] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.837561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.417 [2024-10-05 17:55:39.837574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.417 [2024-10-05 17:55:39.837626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.418 [2024-10-05 17:55:39.837655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.418 #27 NEW cov: 12395 ft: 15392 corp: 26/65b lim: 5 exec/s: 27 rss: 75Mb L: 3/5 MS: 1 CopyPart- 00:06:18.418 [2024-10-05 17:55:39.877440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.418 [2024-10-05 17:55:39.877470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.418 [2024-10-05 17:55:39.877533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.418 [2024-10-05 17:55:39.877548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.676 #28 NEW cov: 12395 ft: 15403 corp: 27/67b lim: 5 exec/s: 28 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:18.676 [2024-10-05 17:55:39.917677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.917701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:39.917756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.917769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:39.917823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.917836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.676 #29 NEW cov: 12395 ft: 15413 corp: 28/70b lim: 5 exec/s: 29 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:06:18.676 [2024-10-05 17:55:39.957769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.957794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:39.957850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.957866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:39.957918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.957932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.676 #30 NEW cov: 12395 ft: 15422 corp: 29/73b lim: 5 exec/s: 30 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:06:18.676 [2024-10-05 17:55:39.997856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.997880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:39.997938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.997952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:39.998004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:39.998017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.676 #31 NEW cov: 12395 ft: 15446 corp: 30/76b lim: 5 exec/s: 31 rss: 75Mb L: 3/5 MS: 1 ChangeByte- 00:06:18.676 [2024-10-05 17:55:40.058139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:40.058167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:40.058239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:40.058253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:40.058308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:40.058322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.676 #32 NEW cov: 12395 ft: 15465 corp: 31/79b lim: 5 exec/s: 32 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:06:18.676 [2024-10-05 17:55:40.098101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:40.098127] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.676 [2024-10-05 17:55:40.098183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.676 [2024-10-05 17:55:40.098205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.934 #33 NEW cov: 12395 ft: 15565 corp: 32/81b lim: 5 exec/s: 33 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:18.934 [2024-10-05 17:55:40.158427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.158453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.158508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.158522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.158577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.158591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.934 #34 NEW cov: 12395 ft: 15603 corp: 33/84b lim: 5 exec/s: 34 rss: 76Mb L: 3/5 MS: 1 CrossOver- 00:06:18.934 [2024-10-05 17:55:40.218570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.218595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.218650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.218664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.218717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.218730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.934 #35 NEW cov: 12395 ft: 15605 corp: 34/87b lim: 5 exec/s: 35 rss: 76Mb L: 3/5 MS: 1 ChangeByte- 00:06:18.934 [2024-10-05 17:55:40.258638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.258663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.258715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.258730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.258784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.258798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.318843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.318869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.318926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.318940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.934 [2024-10-05 17:55:40.318995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:18.934 [2024-10-05 17:55:40.319009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.934 #37 NEW cov: 12395 ft: 15645 corp: 35/90b lim: 5 exec/s: 18 rss: 76Mb L: 3/5 MS: 2 ChangeBit-ChangeByte- 00:06:18.934 #37 DONE cov: 12395 ft: 15645 corp: 35/90b lim: 5 exec/s: 18 rss: 76Mb 00:06:18.934 Done 37 runs in 2 second(s) 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:19.192 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:19.193 
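[Editor's note] The nvmf/run.sh trace entries ending above and continuing below (port computation, corpus dir, trid, config sed, leak suppressions, fuzzer invocation) record the full per-fuzzer launch sequence. For readability, here is a minimal standalone sketch condensed from those traces. The script framing, variable names, redirections, and the export of LSAN_OPTIONS are assumptions added for illustration; the flags, paths, port arithmetic, and suppression entries are taken verbatim from the trace.

#!/usr/bin/env bash
# Sketch of one start_llvm_fuzz iteration as recorded in the nvmf/run.sh trace.
fuzzer_type=10   # -Z argument seen in the trace
timen=1          # -t argument: time budget passed through to the fuzzer
core=0x1         # -m argument: core mask for the SPDK app
spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

# Port is 44 followed by the zero-padded fuzzer number: 09 -> 4409, 10 -> 4410.
port="44$(printf %02d "$fuzzer_type")"
corpus_dir="$spdk/../corpus/llvm_nvmf_${fuzzer_type}"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
suppress_file=/var/tmp/suppress_nvmf_fuzz

mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

# Rewrite the template config so the TCP target listens on this fuzzer's port
# (redirection into $nvmf_cfg is assumed; the trace shows only the sed command).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
    "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Allocations that intentionally outlive the run are suppressed for LeakSanitizer.
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

"$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$spdk/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
    -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

Each iteration then tears down its /tmp config and suppression file (the rm -rf at run.sh@54 above) before the loop advances to the next fuzzer type. The setup trace for fuzzer 10 continues below with the trid assignment and config sed.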
17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:19.193 17:55:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:19.193 [2024-10-05 17:55:40.510055] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:19.193 [2024-10-05 17:55:40.510147] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481673 ] 00:06:19.450 [2024-10-05 17:55:40.694696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.450 [2024-10-05 17:55:40.762349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.450 [2024-10-05 17:55:40.821302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.450 [2024-10-05 17:55:40.837622] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:19.450 INFO: Running with entropic power schedule (0xFF, 100). 00:06:19.450 INFO: Seed: 2187933089 00:06:19.450 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:19.450 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:19.450 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:19.450 INFO: A corpus is not provided, starting from an empty corpus 00:06:19.450 #2 INITED exec/s: 0 rss: 65Mb 00:06:19.450 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:19.450 This may also happen if the target rejected all inputs we tried so far 00:06:19.450 [2024-10-05 17:55:40.903991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.450 [2024-10-05 17:55:40.904032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.450 [2024-10-05 17:55:40.904170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.450 [2024-10-05 17:55:40.904191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.965 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:19.965 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:19.965 #4 NEW cov: 12191 ft: 12182 corp: 2/24b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:19.965 [2024-10-05 17:55:41.235278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.235315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.235445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.235463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.235594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.235611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.235740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.235758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.965 #5 NEW cov: 12304 ft: 13210 corp: 3/61b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 CopyPart- 00:06:19.965 [2024-10-05 17:55:41.305374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c843 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.305403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.305539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:43434343 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.305556] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.305691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:43c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.305709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.305844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.305862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.965 #11 NEW cov: 12310 ft: 13491 corp: 4/98b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:19.965 [2024-10-05 17:55:41.355517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.355546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.355677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.355695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.355825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.355843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.355976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.355993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:19.965 #12 NEW cov: 12395 ft: 13704 corp: 5/135b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 ShuffleBytes- 00:06:19.965 [2024-10-05 17:55:41.425423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.425450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.965 [2024-10-05 17:55:41.425574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.965 [2024-10-05 17:55:41.425591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.223 #13 NEW cov: 12395 ft: 13860 corp: 6/158b lim: 40 exec/s: 0 rss: 73Mb L: 23/37 MS: 1 ChangeBit- 00:06:20.223 [2024-10-05 17:55:41.475897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.475924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.476060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.476078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.476232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.476249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.476382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.476398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.223 #14 NEW cov: 12395 ft: 13911 corp: 7/190b lim: 40 exec/s: 0 rss: 73Mb L: 32/37 MS: 1 EraseBytes- 00:06:20.223 [2024-10-05 17:55:41.546065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.546091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.546232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.546249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.546376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.546392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.546527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.546545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.223 #15 NEW cov: 12395 ft: 14041 corp: 8/229b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:06:20.223 [2024-10-05 17:55:41.595891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.595920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.596048] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.596066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.223 #16 NEW cov: 12395 ft: 14079 corp: 9/250b lim: 40 exec/s: 0 rss: 73Mb L: 21/39 MS: 1 EraseBytes- 00:06:20.223 [2024-10-05 17:55:41.666540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c843 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.666566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.666697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:43434343 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.666714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.666845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:43c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.666862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.223 [2024-10-05 17:55:41.666996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.223 [2024-10-05 17:55:41.667014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.481 #17 NEW cov: 12395 ft: 14168 corp: 10/287b lim: 40 exec/s: 0 rss: 73Mb L: 37/39 MS: 1 ShuffleBytes- 00:06:20.481 [2024-10-05 17:55:41.736349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.736375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.481 [2024-10-05 17:55:41.736524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c7c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.736541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.481 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:20.481 #18 NEW cov: 12418 ft: 14223 corp: 11/308b lim: 40 exec/s: 0 rss: 73Mb L: 21/39 MS: 1 ChangeBinInt- 00:06:20.481 [2024-10-05 17:55:41.806523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.806552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.481 [2024-10-05 17:55:41.806693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.806712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.481 #19 NEW cov: 12418 ft: 14263 corp: 12/331b lim: 40 exec/s: 0 rss: 73Mb L: 23/39 MS: 1 ShuffleBytes- 00:06:20.481 [2024-10-05 17:55:41.856888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.856917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.481 [2024-10-05 17:55:41.857058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.857076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.481 [2024-10-05 17:55:41.857213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.857231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.481 #20 NEW cov: 12418 ft: 14476 corp: 13/359b lim: 40 exec/s: 20 rss: 74Mb L: 28/39 MS: 1 EraseBytes- 00:06:20.481 [2024-10-05 17:55:41.926856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.926883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.481 [2024-10-05 17:55:41.927013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c7c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.481 [2024-10-05 17:55:41.927030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.738 #21 NEW cov: 12418 ft: 14529 corp: 14/380b lim: 40 exec/s: 21 rss: 74Mb L: 21/39 MS: 1 ShuffleBytes- 00:06:20.738 [2024-10-05 17:55:41.996875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c802c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.738 [2024-10-05 17:55:41.996907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.738 #22 NEW cov: 12418 ft: 14859 corp: 15/395b lim: 40 exec/s: 22 rss: 74Mb L: 15/39 MS: 1 CrossOver- 00:06:20.738 [2024-10-05 17:55:42.047355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.047382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 17:55:42.047528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
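
Each *NOTICE* pair in these traces is SPDK echoing one fuzzed admin command as submitted (nvme_admin_qpair_print_command) and the completion the target returned for it (spdk_nvme_print_completion). A field-by-field reading of the cid:4 pair just above, with meanings taken from the NVMe base specification:

    SECURITY RECEIVE (82)     admin opcode 0x82, the command this fuzzer round mutates
    qid:0                     queue ID 0, i.e. the admin queue
    cid:4                     command identifier; ties the command to its completion
    nsid:0                    namespace ID carried in the command
    cdw10 / cdw11             command dwords 10-11, filled with fuzzed bytes (0x02c8c8c8...)
    SGL TRANSPORT DATA BLOCK  the payload is described via a transport SGL descriptor
    INVALID OPCODE (00/01)    completion status: status code type 0h (generic), status
                              code 01h (Invalid Command Opcode) - the target refuses it
    cdw0:0 sqhd:000f          completion dword 0 and the submission queue head pointer
    p:0 m:0 dnr:0             phase tag, "more" bit, and do-not-retry bit
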
00:06:20.739 [2024-10-05 17:55:42.047547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 17:55:42.047683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c9c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.047699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.739 #23 NEW cov: 12418 ft: 14881 corp: 16/419b lim: 40 exec/s: 23 rss: 74Mb L: 24/39 MS: 1 CopyPart- 00:06:20.739 [2024-10-05 17:55:42.097807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.097836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 17:55:42.097977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.097994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 17:55:42.098132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.098149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 17:55:42.098290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.098309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.739 #24 NEW cov: 12418 ft: 14900 corp: 17/458b lim: 40 exec/s: 24 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:20.739 [2024-10-05 17:55:42.147473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.147501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 17:55:42.147653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.147671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.739 #25 NEW cov: 12418 ft: 14946 corp: 18/481b lim: 40 exec/s: 25 rss: 74Mb L: 23/39 MS: 1 ShuffleBytes- 00:06:20.739 [2024-10-05 17:55:42.197630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:d8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.197657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.739 [2024-10-05 
17:55:42.197804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c7c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.739 [2024-10-05 17:55:42.197822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.996 #26 NEW cov: 12418 ft: 14959 corp: 19/502b lim: 40 exec/s: 26 rss: 74Mb L: 21/39 MS: 1 ChangeBit- 00:06:20.996 [2024-10-05 17:55:42.247815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8e8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.247842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.247987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c7c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.248009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.996 #27 NEW cov: 12418 ft: 14994 corp: 20/523b lim: 40 exec/s: 27 rss: 74Mb L: 21/39 MS: 1 ChangeBit- 00:06:20.996 [2024-10-05 17:55:42.318540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8cf cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.318567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.318697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.318716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.318843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.318861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.318995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.319012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.996 #28 NEW cov: 12418 ft: 15004 corp: 21/555b lim: 40 exec/s: 28 rss: 74Mb L: 32/39 MS: 1 ChangeBinInt- 00:06:20.996 [2024-10-05 17:55:42.368600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.368628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.368769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8cac8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 
[2024-10-05 17:55:42.368787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.368918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.368935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.369074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.369093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.996 #29 NEW cov: 12418 ft: 15033 corp: 22/587b lim: 40 exec/s: 29 rss: 74Mb L: 32/39 MS: 1 ChangeBit- 00:06:20.996 [2024-10-05 17:55:42.408765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.408793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.408928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.408945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.409077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.409095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.996 [2024-10-05 17:55:42.409228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.409245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.996 #30 NEW cov: 12418 ft: 15105 corp: 23/619b lim: 40 exec/s: 30 rss: 74Mb L: 32/39 MS: 1 CopyPart- 00:06:20.996 [2024-10-05 17:55:42.458486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:d8d8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.996 [2024-10-05 17:55:42.458513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.253 [2024-10-05 17:55:42.458636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c7c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.253 [2024-10-05 17:55:42.458653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.253 #31 NEW cov: 12418 ft: 15132 corp: 24/640b lim: 40 exec/s: 31 rss: 74Mb L: 21/39 MS: 1 ChangeBit- 00:06:21.253 [2024-10-05 17:55:42.529047] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.253 [2024-10-05 17:55:42.529073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.253 [2024-10-05 17:55:42.529191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.253 [2024-10-05 17:55:42.529209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.253 [2024-10-05 17:55:42.529332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.253 [2024-10-05 17:55:42.529348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.253 [2024-10-05 17:55:42.529475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.253 [2024-10-05 17:55:42.529491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.253 #32 NEW cov: 12418 ft: 15146 corp: 25/679b lim: 40 exec/s: 32 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:06:21.253 [2024-10-05 17:55:42.599175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c843 cdw11:4343b643 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.253 [2024-10-05 17:55:42.599206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.599331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:43434343 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.599347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.599482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:43c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.599499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.599631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.599650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.254 #33 NEW cov: 12418 ft: 15157 corp: 26/716b lim: 40 exec/s: 33 rss: 74Mb L: 37/39 MS: 1 ChangeBinInt- 00:06:21.254 [2024-10-05 17:55:42.649585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c843 cdw11:4343b643 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.649610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
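
The '#N NEW' lines interleaved with these traces are libFuzzer's standard status output. Annotating the '#33 NEW' line a few entries above (field meanings as given in the libFuzzer documentation; 'L:' is commonly read as the size of this input over the largest corpus entry):

    #33                  inputs executed so far in this round
    NEW                  the input added coverage and was kept in the corpus
    cov: 12418           total code-coverage points (blocks/edges) observed
    ft: 15157            'features': finer-grained coverage signals than cov
    corp: 26/716b        corpus now holds 26 entries totalling 716 bytes
    lim: 40              current cap on generated input length
    exec/s: 33           fuzzer iterations per second
    rss: 74Mb            resident memory of the fuzz process
    L: 37/39             this input is 37 bytes; the largest corpus entry is 39
    MS: 1 ChangeBinInt-  the mutation sequence that produced it

'#N DONE' repeats the final statistics for the round, and 'Done N runs in N second(s)' is the harness's own summary line.
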
00:06:21.254 [2024-10-05 17:55:42.649751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:43434343 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.649769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.649900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:43c8c8c8 cdw11:c843c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.649918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.650052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.650068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.650197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.650216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.254 #34 NEW cov: 12418 ft: 15204 corp: 27/756b lim: 40 exec/s: 34 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:06:21.254 [2024-10-05 17:55:42.709649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c83fc8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.709675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.709807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.709824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.709956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.709971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.254 [2024-10-05 17:55:42.710097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.254 [2024-10-05 17:55:42.710115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.511 #35 NEW cov: 12418 ft: 15254 corp: 28/788b lim: 40 exec/s: 35 rss: 74Mb L: 32/40 MS: 1 ChangeByte- 00:06:21.511 [2024-10-05 17:55:42.759391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.511 [2024-10-05 17:55:42.759418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.511 [2024-10-05 17:55:42.759546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b2b2b2b2 cdw11:b2c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.511 [2024-10-05 17:55:42.759563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.511 [2024-10-05 17:55:42.759692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c9c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.511 [2024-10-05 17:55:42.759710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.511 #36 NEW cov: 12418 ft: 15374 corp: 29/814b lim: 40 exec/s: 36 rss: 74Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:06:21.511 [2024-10-05 17:55:42.799344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c8bc cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.511 [2024-10-05 17:55:42.799371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.511 [2024-10-05 17:55:42.799501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.511 [2024-10-05 17:55:42.799519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.511 #37 NEW cov: 12418 ft: 15420 corp: 30/835b lim: 40 exec/s: 37 rss: 74Mb L: 21/40 MS: 1 ChangeByte- 00:06:21.512 [2024-10-05 17:55:42.849873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02c8c843 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.512 [2024-10-05 17:55:42.849900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.512 [2024-10-05 17:55:42.850031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:43434343 cdw11:43434343 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.512 [2024-10-05 17:55:42.850046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.512 [2024-10-05 17:55:42.850177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:437ec8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.512 [2024-10-05 17:55:42.850197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.512 [2024-10-05 17:55:42.850329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:c8c8c8c8 cdw11:c8c8c8c8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:21.512 [2024-10-05 17:55:42.850346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.512 #38 NEW cov: 12418 ft: 15474 corp: 31/872b lim: 40 exec/s: 19 rss: 74Mb L: 37/40 MS: 1 ChangeByte- 00:06:21.512 #38 DONE cov: 12418 ft: 15474 corp: 31/872b lim: 40 exec/s: 19 rss: 74Mb 00:06:21.512 Done 38 runs in 2 second(s) 00:06:21.769 17:55:43 
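
Run 10 ends here; the harness removes its per-round files and immediately starts fuzzer type 11 on the next port. The sequence is identical every round and can be read straight off the nvmf/run.sh trace lines that follow. A condensed shell sketch of that sequence (paths shortened; corpus_dir, output_dir and nvmf_cfg are illustrative names, and the sed redirection into the per-round config file is implied by the later -c argument rather than visible in the trace):

    # one timed libFuzzer round per fuzzer type; the TCP service ID encodes the type
    fuzzer_type=11
    port=44$(printf %02d "$fuzzer_type")        # -> 4411
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf

    mkdir -p "$corpus_dir"                      # .../corpus/llvm_nvmf_${fuzzer_type}
    # retarget the template JSON config at this round's port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf > "$nvmf_cfg"
    # allocations that intentionally outlive the fuzz loop are hidden from LeakSanitizer
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > /var/tmp/suppress_nvmf_fuzz
    # run.sh declares this as a local variable; exporting it has the same effect here
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0

    # -t 1 bounds the round's runtime; -Z selects which admin-command fuzzer to exercise
    llvm_nvme_fuzz -m 0x1 -s 512 -P "$output_dir" -F "$trid" \
        -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"
    rm -rf "$nvmf_cfg" /var/tmp/suppress_nvmf_fuzz   # run.sh@54: clean up before the next round
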
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:21.769 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:21.770 17:55:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:21.770 [2024-10-05 17:55:43.062736] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:21.770 [2024-10-05 17:55:43.062821] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482110 ] 00:06:22.027 [2024-10-05 17:55:43.247682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.027 [2024-10-05 17:55:43.316589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.027 [2024-10-05 17:55:43.375927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.027 [2024-10-05 17:55:43.392243] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:22.027 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:22.027 INFO: Seed: 447984873 00:06:22.027 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:22.027 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:22.027 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:22.027 INFO: A corpus is not provided, starting from an empty corpus 00:06:22.027 #2 INITED exec/s: 0 rss: 66Mb 00:06:22.027 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:22.027 This may also happen if the target rejected all inputs we tried so far 00:06:22.027 [2024-10-05 17:55:43.458502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.027 [2024-10-05 17:55:43.458538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.542 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:22.543 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:22.543 #7 NEW cov: 12200 ft: 12197 corp: 2/14b lim: 40 exec/s: 0 rss: 73Mb L: 13/13 MS: 5 InsertByte-CopyPart-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:22.543 [2024-10-05 17:55:43.799735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.543 [2024-10-05 17:55:43.799780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.543 [2024-10-05 17:55:43.799926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff2f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.543 [2024-10-05 17:55:43.799945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.543 #8 NEW cov: 12316 ft: 13585 corp: 3/30b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:06:22.543 [2024-10-05 17:55:43.859452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.543 [2024-10-05 17:55:43.859479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.543 #9 NEW cov: 12322 ft: 13900 corp: 4/44b lim: 40 exec/s: 0 rss: 73Mb L: 14/16 MS: 1 InsertByte- 00:06:22.543 [2024-10-05 17:55:43.909599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.543 [2024-10-05 17:55:43.909627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.543 #10 NEW cov: 12407 ft: 14092 corp: 5/59b lim: 40 exec/s: 0 rss: 73Mb L: 15/16 MS: 1 InsertByte- 00:06:22.543 [2024-10-05 17:55:43.979800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b02ef2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.543 [2024-10-05 
17:55:43.979828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.800 #16 NEW cov: 12407 ft: 14187 corp: 6/74b lim: 40 exec/s: 0 rss: 74Mb L: 15/16 MS: 1 ChangeByte- 00:06:22.800 [2024-10-05 17:55:44.050190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.800 [2024-10-05 17:55:44.050217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.800 #17 NEW cov: 12407 ft: 14230 corp: 7/89b lim: 40 exec/s: 0 rss: 74Mb L: 15/16 MS: 1 CopyPart- 00:06:22.800 [2024-10-05 17:55:44.090389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.800 [2024-10-05 17:55:44.090417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.800 [2024-10-05 17:55:44.090567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.800 [2024-10-05 17:55:44.090584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.800 #18 NEW cov: 12407 ft: 14302 corp: 8/108b lim: 40 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CopyPart- 00:06:22.800 [2024-10-05 17:55:44.140597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04fffff2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.800 [2024-10-05 17:55:44.140622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.800 [2024-10-05 17:55:44.140756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.800 [2024-10-05 17:55:44.140771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.800 #19 NEW cov: 12407 ft: 14349 corp: 9/124b lim: 40 exec/s: 0 rss: 74Mb L: 16/19 MS: 1 CopyPart- 00:06:22.800 [2024-10-05 17:55:44.210458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a04b025 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.801 [2024-10-05 17:55:44.210485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.801 #21 NEW cov: 12407 ft: 14383 corp: 10/137b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 2 InsertByte-CrossOver- 00:06:22.801 [2024-10-05 17:55:44.260655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.801 [2024-10-05 17:55:44.260680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.059 #22 NEW cov: 12407 ft: 14441 corp: 11/150b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 ChangeBit- 00:06:23.059 [2024-10-05 17:55:44.311633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.311659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.059 [2024-10-05 17:55:44.311780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.311812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.059 [2024-10-05 17:55:44.311942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.311958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.059 [2024-10-05 17:55:44.312088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff02 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.312105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.059 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:23.059 #26 NEW cov: 12430 ft: 14814 corp: 12/182b lim: 40 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 CopyPart-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:23.059 [2024-10-05 17:55:44.361236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04fffff2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.361264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.059 [2024-10-05 17:55:44.361402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.361419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.059 #27 NEW cov: 12430 ft: 14870 corp: 13/198b lim: 40 exec/s: 0 rss: 74Mb L: 16/32 MS: 1 ShuffleBytes- 00:06:23.059 [2024-10-05 17:55:44.431434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f272 cdw11:f2f2ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.431460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.059 [2024-10-05 17:55:44.431584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff2f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 17:55:44.431600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.059 #28 NEW cov: 12430 ft: 14910 corp: 14/214b lim: 40 exec/s: 28 rss: 74Mb L: 16/32 MS: 1 ChangeBit- 00:06:23.059 [2024-10-05 17:55:44.481324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.059 [2024-10-05 
17:55:44.481352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.059 #29 NEW cov: 12430 ft: 14961 corp: 15/223b lim: 40 exec/s: 29 rss: 74Mb L: 9/32 MS: 1 EraseBytes- 00:06:23.317 [2024-10-05 17:55:44.551821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f2ff0472 cdw11:f2b0f2ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.317 [2024-10-05 17:55:44.551848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.317 [2024-10-05 17:55:44.551985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff2f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.317 [2024-10-05 17:55:44.552002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.317 #30 NEW cov: 12430 ft: 15003 corp: 16/239b lim: 40 exec/s: 30 rss: 74Mb L: 16/32 MS: 1 ShuffleBytes- 00:06:23.317 [2024-10-05 17:55:44.621786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04f2f204 cdw11:f226faf2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.317 [2024-10-05 17:55:44.621813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.317 #33 NEW cov: 12430 ft: 15043 corp: 17/247b lim: 40 exec/s: 33 rss: 74Mb L: 8/32 MS: 3 EraseBytes-CopyPart-InsertByte- 00:06:23.317 [2024-10-05 17:55:44.692226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f2ff0472 cdw11:f2b0f2ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.317 [2024-10-05 17:55:44.692252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.317 [2024-10-05 17:55:44.692405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fff2f2f2 cdw11:f221f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.317 [2024-10-05 17:55:44.692423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.317 #34 NEW cov: 12430 ft: 15051 corp: 18/263b lim: 40 exec/s: 34 rss: 74Mb L: 16/32 MS: 1 ChangeByte- 00:06:23.317 [2024-10-05 17:55:44.762025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.317 [2024-10-05 17:55:44.762053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.574 #35 NEW cov: 12430 ft: 15162 corp: 19/276b lim: 40 exec/s: 35 rss: 74Mb L: 13/32 MS: 1 ShuffleBytes- 00:06:23.574 [2024-10-05 17:55:44.812371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0ffff cdw11:fffffff2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.574 [2024-10-05 17:55:44.812399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.574 #38 NEW cov: 12430 ft: 15189 corp: 20/287b lim: 40 exec/s: 38 rss: 74Mb L: 11/32 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:23.574 [2024-10-05 17:55:44.862457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04f2f2f2 cdw11:0426faf2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.574 [2024-10-05 17:55:44.862486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.574 #39 NEW cov: 12430 ft: 15228 corp: 21/295b lim: 40 exec/s: 39 rss: 74Mb L: 8/32 MS: 1 ShuffleBytes- 00:06:23.574 [2024-10-05 17:55:44.932717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0ffff cdw11:fffff7f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.574 [2024-10-05 17:55:44.932744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.574 #40 NEW cov: 12430 ft: 15274 corp: 22/306b lim: 40 exec/s: 40 rss: 75Mb L: 11/32 MS: 1 ChangeByte- 00:06:23.574 [2024-10-05 17:55:45.003149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffff0019 cdw11:04b0f272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.574 [2024-10-05 17:55:45.003177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.574 [2024-10-05 17:55:45.003314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f2f2ffff cdw11:fff2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.574 [2024-10-05 17:55:45.003332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.574 #41 NEW cov: 12430 ft: 15281 corp: 23/326b lim: 40 exec/s: 41 rss: 75Mb L: 20/32 MS: 1 CMP- DE: "\377\377\000\031"- 00:06:23.832 [2024-10-05 17:55:45.053298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffff0004 cdw11:b0f272f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.832 [2024-10-05 17:55:45.053326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.832 [2024-10-05 17:55:45.053452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f2ffffff cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.832 [2024-10-05 17:55:45.053467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.832 #42 NEW cov: 12430 ft: 15299 corp: 24/345b lim: 40 exec/s: 42 rss: 75Mb L: 19/32 MS: 1 EraseBytes- 00:06:23.832 [2024-10-05 17:55:45.123441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2fa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.832 [2024-10-05 17:55:45.123469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.832 #43 NEW cov: 12430 ft: 15314 corp: 25/358b lim: 40 exec/s: 43 rss: 75Mb L: 13/32 MS: 1 ChangeByte- 00:06:23.832 [2024-10-05 17:55:45.173443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04f226f2 cdw11:f204faf2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.832 [2024-10-05 17:55:45.173471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.832 #44 NEW cov: 12430 ft: 15316 corp: 26/366b lim: 40 exec/s: 44 rss: 75Mb L: 8/32 MS: 1 
ShuffleBytes- 00:06:23.832 [2024-10-05 17:55:45.243685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a04f2f2 cdw11:f2fad9f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.832 [2024-10-05 17:55:45.243713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.832 #45 NEW cov: 12430 ft: 15348 corp: 27/379b lim: 40 exec/s: 45 rss: 75Mb L: 13/32 MS: 1 CrossOver- 00:06:24.089 [2024-10-05 17:55:45.313827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b0f2f2 cdw11:f2f2f2f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.313855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.089 #46 NEW cov: 12430 ft: 15367 corp: 28/394b lim: 40 exec/s: 46 rss: 75Mb L: 15/32 MS: 1 ChangeBit- 00:06:24.089 [2024-10-05 17:55:45.365080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.365105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.365235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.365251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.365377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.365395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.365519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.365535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.365660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00f2f204 cdw11:f226faf2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.365676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.089 #47 NEW cov: 12430 ft: 15447 corp: 29/434b lim: 40 exec/s: 47 rss: 75Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:24.089 [2024-10-05 17:55:45.414991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:04b02ef2 cdw11:f2dbdbdb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.415019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.415143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbdb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 
[2024-10-05 17:55:45.415160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.415291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dbdbdbdb cdw11:dbdbdbf2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.415309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.089 [2024-10-05 17:55:45.415448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:f2f2f2f2 cdw11:f24ef22a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.089 [2024-10-05 17:55:45.415463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.089 #48 NEW cov: 12430 ft: 15489 corp: 30/467b lim: 40 exec/s: 24 rss: 75Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:06:24.089 #48 DONE cov: 12430 ft: 15489 corp: 30/467b lim: 40 exec/s: 24 rss: 75Mb 00:06:24.089 ###### Recommended dictionary. ###### 00:06:24.089 "\377\377\000\031" # Uses: 0 00:06:24.089 ###### End of recommended dictionary. ###### 00:06:24.089 Done 48 runs in 2 second(s) 00:06:24.346 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:24.347 17:55:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:24.347 [2024-10-05 17:55:45.627811] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:24.347 [2024-10-05 17:55:45.627882] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482645 ] 00:06:24.347 [2024-10-05 17:55:45.805603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.605 [2024-10-05 17:55:45.872132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.605 [2024-10-05 17:55:45.930750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.605 [2024-10-05 17:55:45.947062] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:24.605 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.605 INFO: Seed: 3002967067 00:06:24.605 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:24.605 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:24.605 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:24.605 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.605 #2 INITED exec/s: 0 rss: 65Mb 00:06:24.605 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:24.605 This may also happen if the target rejected all inputs we tried so far 00:06:24.605 [2024-10-05 17:55:45.992631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.605 [2024-10-05 17:55:45.992658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.605 [2024-10-05 17:55:45.992717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.605 [2024-10-05 17:55:45.992730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.908 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:24.908 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:24.908 #5 NEW cov: 12201 ft: 12197 corp: 2/18b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:06:24.908 [2024-10-05 17:55:46.313492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.908 [2024-10-05 17:55:46.313524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.908 [2024-10-05 17:55:46.313583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.908 [2024-10-05 17:55:46.313600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.908 #6 NEW cov: 12314 ft: 12747 corp: 3/36b lim: 40 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 CrossOver- 00:06:25.207 [2024-10-05 17:55:46.373780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.373810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.373870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.373884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.373942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.373956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.207 #7 NEW cov: 12320 ft: 13175 corp: 4/65b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:25.207 [2024-10-05 17:55:46.413678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.413706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.413765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.413780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.207 #8 NEW cov: 12405 ft: 13548 corp: 5/82b lim: 40 exec/s: 0 rss: 73Mb L: 17/29 MS: 1 ChangeBinInt- 00:06:25.207 [2024-10-05 17:55:46.453636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:3000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.453663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 #10 NEW cov: 12405 ft: 14484 corp: 6/90b lim: 40 exec/s: 0 rss: 73Mb L: 8/29 MS: 2 InsertRepeatedBytes-InsertByte- 00:06:25.207 [2024-10-05 17:55:46.494253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.494278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.494338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.494353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.494412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.494426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.494482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.494496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.207 #11 NEW cov: 12405 ft: 14871 corp: 7/123b lim: 40 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:06:25.207 [2024-10-05 17:55:46.554085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.554110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.554169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.554183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.207 #12 
NEW cov: 12405 ft: 14933 corp: 8/141b lim: 40 exec/s: 0 rss: 73Mb L: 18/33 MS: 1 ShuffleBytes- 00:06:25.207 [2024-10-05 17:55:46.614447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.614473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.614532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.614546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.614604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.614618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.207 #13 NEW cov: 12405 ft: 14957 corp: 9/170b lim: 40 exec/s: 0 rss: 73Mb L: 29/33 MS: 1 CopyPart- 00:06:25.207 [2024-10-05 17:55:46.654551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:c5ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.654578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.654638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.654653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.207 [2024-10-05 17:55:46.654710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.207 [2024-10-05 17:55:46.654724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.466 #14 NEW cov: 12405 ft: 15070 corp: 10/199b lim: 40 exec/s: 0 rss: 73Mb L: 29/33 MS: 1 ChangeByte- 00:06:25.466 [2024-10-05 17:55:46.714873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.714898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.714956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.714970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.715027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.715040] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.715100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.715115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.466 #15 NEW cov: 12405 ft: 15145 corp: 11/238b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:25.466 [2024-10-05 17:55:46.754668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.754694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.754754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.754769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.466 #16 NEW cov: 12405 ft: 15162 corp: 12/260b lim: 40 exec/s: 0 rss: 73Mb L: 22/39 MS: 1 EraseBytes- 00:06:25.466 [2024-10-05 17:55:46.794941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.794966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.795028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.795042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.795099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.795113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.466 #17 NEW cov: 12405 ft: 15192 corp: 13/290b lim: 40 exec/s: 0 rss: 74Mb L: 30/39 MS: 1 EraseBytes- 00:06:25.466 [2024-10-05 17:55:46.855263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.855289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.855349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.855364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.855422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.855436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.855493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.855507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.466 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:25.466 #18 NEW cov: 12428 ft: 15241 corp: 14/324b lim: 40 exec/s: 0 rss: 74Mb L: 34/39 MS: 1 InsertByte- 00:06:25.466 [2024-10-05 17:55:46.915068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000c5 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.915098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.466 [2024-10-05 17:55:46.915156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.466 [2024-10-05 17:55:46.915171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.724 #19 NEW cov: 12428 ft: 15258 corp: 15/342b lim: 40 exec/s: 0 rss: 74Mb L: 18/39 MS: 1 ChangeByte- 00:06:25.724 [2024-10-05 17:55:46.955202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:46.955228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.724 [2024-10-05 17:55:46.955286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:46.955301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.724 #20 NEW cov: 12428 ft: 15284 corp: 16/359b lim: 40 exec/s: 20 rss: 74Mb L: 17/39 MS: 1 ShuffleBytes- 00:06:25.724 [2024-10-05 17:55:47.015409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:47.015434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.724 [2024-10-05 17:55:47.015495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:47.015510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.724 #21 NEW cov: 12428 ft: 15324 corp: 17/381b lim: 40 exec/s: 21 rss: 74Mb L: 22/39 MS: 1 ShuffleBytes- 00:06:25.724 [2024-10-05 17:55:47.075770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:4 nsid:0 cdw10:020000ff cdw11:ff00ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:47.075796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.724 [2024-10-05 17:55:47.075854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:47.075868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.724 [2024-10-05 17:55:47.075943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:47.075958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.724 #22 NEW cov: 12428 ft: 15328 corp: 18/410b lim: 40 exec/s: 22 rss: 74Mb L: 29/39 MS: 1 ShuffleBytes- 00:06:25.724 [2024-10-05 17:55:47.115698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000c5 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.724 [2024-10-05 17:55:47.115724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.725 [2024-10-05 17:55:47.115783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.725 [2024-10-05 17:55:47.115798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.725 #23 NEW cov: 12428 ft: 15398 corp: 19/428b lim: 40 exec/s: 23 rss: 74Mb L: 18/39 MS: 1 ChangeBit- 00:06:25.725 [2024-10-05 17:55:47.176231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.725 [2024-10-05 17:55:47.176257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.725 [2024-10-05 17:55:47.176316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff01ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.725 [2024-10-05 17:55:47.176330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.725 [2024-10-05 17:55:47.176388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.725 [2024-10-05 17:55:47.176403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.725 [2024-10-05 17:55:47.176458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.725 [2024-10-05 17:55:47.176472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.983 #24 NEW cov: 12428 ft: 15426 corp: 20/462b lim: 40 exec/s: 24 rss: 74Mb L: 34/39 MS: 1 
ChangeBinInt- 00:06:25.983 [2024-10-05 17:55:47.236228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000ff cdw11:ff00ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.236254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.236315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00260000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.236328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.236385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.236399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.983 #25 NEW cov: 12428 ft: 15433 corp: 21/491b lim: 40 exec/s: 25 rss: 74Mb L: 29/39 MS: 1 ChangeByte- 00:06:25.983 [2024-10-05 17:55:47.296403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.296428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.296489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.296503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.296561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.296575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.983 #26 NEW cov: 12428 ft: 15453 corp: 22/520b lim: 40 exec/s: 26 rss: 74Mb L: 29/39 MS: 1 ShuffleBytes- 00:06:25.983 [2024-10-05 17:55:47.336726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.336758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.336816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff01ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.336829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.336887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.336901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.336957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.336971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.983 #27 NEW cov: 12428 ft: 15476 corp: 23/554b lim: 40 exec/s: 27 rss: 74Mb L: 34/39 MS: 1 ChangeBit- 00:06:25.983 [2024-10-05 17:55:47.396699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000ff cdw11:ff00ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.396725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.396786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00260000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.396801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.983 [2024-10-05 17:55:47.396858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.983 [2024-10-05 17:55:47.396873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.983 #28 NEW cov: 12428 ft: 15484 corp: 24/583b lim: 40 exec/s: 28 rss: 74Mb L: 29/39 MS: 1 ChangeBit- 00:06:26.241 [2024-10-05 17:55:47.456685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000000c5 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.456712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.456772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.456786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.241 #29 NEW cov: 12428 ft: 15506 corp: 25/601b lim: 40 exec/s: 29 rss: 75Mb L: 18/39 MS: 1 ChangeBit- 00:06:26.241 [2024-10-05 17:55:47.516824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.516851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.516909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00001100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.516923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.241 #30 NEW cov: 12428 ft: 15577 corp: 26/618b lim: 40 exec/s: 30 rss: 75Mb L: 17/39 MS: 1 ChangeBit- 00:06:26.241 [2024-10-05 17:55:47.577371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02ff00ff cdw11:ff0000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.577396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.577454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff01ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.577468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.577527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.577542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.577599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.577612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.241 #31 NEW cov: 12428 ft: 15591 corp: 27/652b lim: 40 exec/s: 31 rss: 75Mb L: 34/39 MS: 1 ShuffleBytes- 00:06:26.241 [2024-10-05 17:55:47.617464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffff2d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.617489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.617549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff01ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.617563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.617620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff002000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.617634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.617692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.617706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.241 #32 NEW cov: 12428 ft: 15612 corp: 28/687b lim: 40 exec/s: 32 rss: 75Mb L: 35/39 MS: 1 InsertByte- 00:06:26.241 [2024-10-05 17:55:47.677325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.677360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.241 [2024-10-05 17:55:47.677415] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.241 [2024-10-05 17:55:47.677429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.241 #33 NEW cov: 12428 ft: 15636 corp: 29/705b lim: 40 exec/s: 33 rss: 75Mb L: 18/39 MS: 1 ChangeByte- 00:06:26.499 [2024-10-05 17:55:47.717774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.499 [2024-10-05 17:55:47.717800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.717862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.717877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.717950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.717965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.718022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.718036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.500 #34 NEW cov: 12428 ft: 15683 corp: 30/740b lim: 40 exec/s: 34 rss: 75Mb L: 35/39 MS: 1 InsertByte- 00:06:26.500 [2024-10-05 17:55:47.757699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:c5ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.757724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.757783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.757797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.757854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.757868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.500 #35 NEW cov: 12428 ft: 15708 corp: 31/769b lim: 40 exec/s: 35 rss: 75Mb L: 29/39 MS: 1 ShuffleBytes- 00:06:26.500 [2024-10-05 17:55:47.817984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.818009] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.818070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.818084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.818141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.818155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.818216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.818230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.500 #36 NEW cov: 12428 ft: 15717 corp: 32/808b lim: 40 exec/s: 36 rss: 75Mb L: 39/39 MS: 1 ChangeBit- 00:06:26.500 [2024-10-05 17:55:47.858134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:02ff00ff cdw11:ff0000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.858159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.858224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff01ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.858239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.858299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.858313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.858369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.858383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.500 #37 NEW cov: 12428 ft: 15738 corp: 33/842b lim: 40 exec/s: 37 rss: 75Mb L: 34/39 MS: 1 ChangeBinInt- 00:06:26.500 [2024-10-05 17:55:47.918280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0200ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.918306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.918366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.918380] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.918440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.918454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.500 [2024-10-05 17:55:47.918513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.500 [2024-10-05 17:55:47.918526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.500 #38 NEW cov: 12428 ft: 15749 corp: 34/878b lim: 40 exec/s: 38 rss: 75Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:06:26.758 [2024-10-05 17:55:47.978263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:020000ff cdw11:ffff68f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.758 [2024-10-05 17:55:47.978290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.758 [2024-10-05 17:55:47.978349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e08d56a8 cdw11:7e260000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.758 [2024-10-05 17:55:47.978364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.758 [2024-10-05 17:55:47.978441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.758 [2024-10-05 17:55:47.978455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.758 #39 NEW cov: 12428 ft: 15763 corp: 35/907b lim: 40 exec/s: 19 rss: 75Mb L: 29/39 MS: 1 CMP- DE: "\377h\361\340\215V\250~"- 00:06:26.758 #39 DONE cov: 12428 ft: 15763 corp: 35/907b lim: 40 exec/s: 19 rss: 75Mb 00:06:26.758 ###### Recommended dictionary. ###### 00:06:26.758 "\377h\361\340\215V\250~" # Uses: 0 00:06:26.758 ###### End of recommended dictionary. 
###### 00:06:26.758 Done 39 runs in 2 second(s) 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.758 17:55:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:26.758 [2024-10-05 17:55:48.192554] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:26.758 [2024-10-05 17:55:48.192625] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483069 ] 00:06:27.017 [2024-10-05 17:55:48.376121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.017 [2024-10-05 17:55:48.444687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.274 [2024-10-05 17:55:48.503339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.274 [2024-10-05 17:55:48.519692] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:27.274 INFO: Running with entropic power schedule (0xFF, 100). 00:06:27.274 INFO: Seed: 1278998971 00:06:27.274 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:27.274 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:27.274 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:27.274 INFO: A corpus is not provided, starting from an empty corpus 00:06:27.274 #2 INITED exec/s: 0 rss: 65Mb 00:06:27.274 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:27.274 This may also happen if the target rejected all inputs we tried so far 00:06:27.274 [2024-10-05 17:55:48.578804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f646464 cdw11:64646464 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.274 [2024-10-05 17:55:48.578832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.531 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:27.531 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:27.531 #6 NEW cov: 12189 ft: 12182 corp: 2/12b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 InsertByte-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:06:27.532 [2024-10-05 17:55:48.909700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.909732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.909788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.909803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.532 #12 NEW cov: 12302 ft: 12960 corp: 3/34b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:06:27.532 [2024-10-05 17:55:48.949987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.950012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.950068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.950082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.950136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.950150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.950209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909027 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.950223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.532 #14 NEW cov: 12308 ft: 13696 corp: 4/66b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:27.532 [2024-10-05 17:55:48.990139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.990165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.990227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.990242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.990297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.990311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.532 [2024-10-05 17:55:48.990366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.532 [2024-10-05 17:55:48.990380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.790 #15 NEW cov: 12393 ft: 14009 corp: 5/99b lim: 40 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:06:27.790 [2024-10-05 17:55:49.049979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3f6464 cdw11:64646464 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.050005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.790 #17 NEW cov: 12393 ft: 14179 corp: 6/110b lim: 40 exec/s: 0 rss: 73Mb L: 11/33 MS: 2 CopyPart-CrossOver- 00:06:27.790 [2024-10-05 17:55:49.090152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.090178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.090239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.090253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.790 #18 NEW cov: 12393 ft: 14278 corp: 7/132b lim: 40 exec/s: 0 rss: 73Mb L: 22/33 MS: 1 ChangeBinInt- 00:06:27.790 [2024-10-05 17:55:49.150598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.150623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.150679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.150692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.150746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.150760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.150812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:90900000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.150825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.790 #19 NEW cov: 12393 ft: 14316 corp: 8/170b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:27.790 [2024-10-05 17:55:49.190440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.190465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.190520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.190534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.790 #20 NEW cov: 12393 ft: 14423 corp: 9/190b lim: 40 exec/s: 0 rss: 73Mb L: 20/38 MS: 1 EraseBytes- 00:06:27.790 [2024-10-05 17:55:49.230652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.230677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.230731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.230749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.790 [2024-10-05 17:55:49.230802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.790 [2024-10-05 17:55:49.230816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.048 #21 NEW cov: 12393 ft: 14645 corp: 10/221b lim: 40 exec/s: 0 rss: 73Mb L: 31/38 MS: 1 EraseBytes- 00:06:28.048 [2024-10-05 17:55:49.290603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3f6464 cdw11:64646464 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.290628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.048 #22 NEW cov: 12393 ft: 14685 corp: 11/232b lim: 40 exec/s: 0 rss: 73Mb L: 11/38 MS: 1 ChangeByte- 00:06:28.048 [2024-10-05 17:55:49.350894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.350919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.350976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.350990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.048 #23 NEW cov: 12393 ft: 14746 corp: 12/252b lim: 40 exec/s: 0 rss: 74Mb L: 20/38 MS: 1 CopyPart- 00:06:28.048 [2024-10-05 17:55:49.411434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff72 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.411460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.411516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:72727272 cdw11:72727272 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.411530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.411585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:72727272 cdw11:72727272 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.411598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.411651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:72ffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.411664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.411718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.411731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.048 #24 NEW cov: 12393 ft: 14805 corp: 13/292b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:28.048 [2024-10-05 17:55:49.451143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.451170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.451231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.451245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.048 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:28.048 #25 NEW cov: 12416 ft: 14851 corp: 14/312b lim: 40 exec/s: 0 rss: 74Mb L: 20/40 MS: 1 ChangeBit- 00:06:28.048 [2024-10-05 17:55:49.491267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fffffff7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.491294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.048 [2024-10-05 17:55:49.491351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.048 [2024-10-05 17:55:49.491366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.306 #26 NEW cov: 12416 ft: 14873 corp: 15/334b lim: 40 exec/s: 0 rss: 74Mb L: 22/40 MS: 1 ChangeBit- 00:06:28.306 [2024-10-05 17:55:49.551329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3f6464 cdw11:64646464 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.551358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.306 #32 NEW cov: 12416 ft: 14890 corp: 16/345b lim: 40 exec/s: 32 rss: 74Mb L: 11/40 MS: 1 CopyPart- 00:06:28.306 [2024-10-05 17:55:49.591521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.591547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.306 [2024-10-05 17:55:49.591603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.591616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.306 #34 NEW cov: 12416 ft: 14929 corp: 17/361b lim: 40 exec/s: 34 rss: 74Mb L: 16/40 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:28.306 [2024-10-05 17:55:49.651816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.651842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.306 [2024-10-05 17:55:49.651899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.651913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.306 [2024-10-05 17:55:49.651968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.651981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.306 #35 NEW cov: 12416 ft: 14963 corp: 18/392b lim: 40 exec/s: 35 rss: 74Mb L: 31/40 MS: 1 ChangeBinInt- 00:06:28.306 [2024-10-05 17:55:49.711736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a64643f cdw11:64646464 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.306 [2024-10-05 17:55:49.711766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.306 #36 NEW cov: 12416 ft: 14974 corp: 19/405b lim: 40 exec/s: 36 rss: 74Mb L: 13/40 MS: 1 CopyPart- 00:06:28.564 [2024-10-05 17:55:49.771955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.771983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.564 #37 NEW cov: 12416 ft: 15052 corp: 20/418b lim: 40 exec/s: 37 rss: 74Mb L: 13/40 MS: 1 EraseBytes- 00:06:28.564 [2024-10-05 17:55:49.832066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3f0764 cdw11:64646464 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.832093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.564 #38 NEW cov: 12416 ft: 15066 corp: 21/430b lim: 40 exec/s: 38 rss: 74Mb L: 12/40 MS: 1 InsertByte- 00:06:28.564 [2024-10-05 17:55:49.872312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.872338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.564 [2024-10-05 17:55:49.872395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.872409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.564 #39 NEW cov: 12416 ft: 15116 corp: 22/450b lim: 40 exec/s: 39 rss: 74Mb L: 20/40 MS: 1 CrossOver- 00:06:28.564 [2024-10-05 17:55:49.912332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3f640b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.912358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.564 #40 NEW cov: 12416 ft: 15124 corp: 23/461b lim: 40 exec/s: 40 rss: 74Mb L: 11/40 MS: 1 ChangeBinInt- 00:06:28.564 [2024-10-05 17:55:49.972585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.972611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.564 [2024-10-05 17:55:49.972667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2cffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.564 [2024-10-05 17:55:49.972681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.564 #41 NEW cov: 12416 ft: 15140 corp: 24/481b lim: 40 exec/s: 41 rss: 74Mb L: 20/40 MS: 1 ChangeByte- 00:06:28.822 [2024-10-05 17:55:50.033015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.822 [2024-10-05 17:55:50.033042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.822 [2024-10-05 17:55:50.033099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.822 [2024-10-05 17:55:50.033113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.822 [2024-10-05 17:55:50.033168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.822 [2024-10-05 17:55:50.033185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.822 [2024-10-05 17:55:50.033257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.822 [2024-10-05 17:55:50.033271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.822 #42 NEW cov: 12416 ft: 15152 corp: 25/517b lim: 40 exec/s: 42 rss: 74Mb L: 36/40 MS: 1 CopyPart- 00:06:28.822 [2024-10-05 17:55:50.072908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.822 [2024-10-05 17:55:50.072935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.822 [2024-10-05 17:55:50.072989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2cffff2c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.822 [2024-10-05 17:55:50.073003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.822 #43 NEW cov: 12416 ft: 15174 corp: 26/537b lim: 40 exec/s: 43 rss: 75Mb L: 20/40 MS: 1 ChangeByte- 00:06:28.822 [2024-10-05 17:55:50.133062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.823 [2024-10-05 17:55:50.133088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.823 [2024-10-05 17:55:50.133146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000efff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.823 [2024-10-05 17:55:50.133160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.823 #44 NEW cov: 12416 ft: 15190 corp: 27/553b lim: 40 exec/s: 44 rss: 75Mb L: 16/40 MS: 1 ChangeBinInt- 00:06:28.823 [2024-10-05 17:55:50.193080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.823 [2024-10-05 17:55:50.193105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.823 #47 NEW cov: 12416 ft: 15194 corp: 28/565b lim: 40 exec/s: 47 rss: 75Mb L: 12/40 MS: 3 CrossOver-ChangeByte-CrossOver- 00:06:28.823 [2024-10-05 17:55:50.233308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.823 [2024-10-05 17:55:50.233333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.823 [2024-10-05 17:55:50.233392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffcbffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.823 [2024-10-05 17:55:50.233406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.823 #48 NEW cov: 12416 ft: 15224 corp: 29/586b lim: 40 exec/s: 48 rss: 75Mb L: 21/40 MS: 1 InsertByte- 00:06:28.823 [2024-10-05 17:55:50.273297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.823 [2024-10-05 17:55:50.273322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.081 #53 NEW cov: 12416 ft: 15236 corp: 30/596b lim: 40 exec/s: 53 rss: 75Mb L: 10/40 MS: 5 EraseBytes-ShuffleBytes-CopyPart-ShuffleBytes-CMP- DE: "\377\377\377\002"- 00:06:29.081 [2024-10-05 17:55:50.313546] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff02 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.313574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.081 [2024-10-05 17:55:50.313632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.313646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.081 #54 NEW cov: 12416 ft: 15246 corp: 31/616b lim: 40 exec/s: 54 rss: 75Mb L: 20/40 MS: 1 PersAutoDict- DE: "\377\377\377\002"- 00:06:29.081 [2024-10-05 17:55:50.353713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.353739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.081 [2024-10-05 17:55:50.353795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.353809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.081 [2024-10-05 17:55:50.353863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.353877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.081 #55 NEW cov: 12416 ft: 15250 corp: 32/647b lim: 40 exec/s: 55 rss: 75Mb L: 31/40 MS: 1 CrossOver- 00:06:29.081 [2024-10-05 17:55:50.393789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffff02 cdw11:ffffff05 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.393814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.081 [2024-10-05 17:55:50.393874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.393888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.081 #56 NEW cov: 12416 ft: 15261 corp: 33/667b lim: 40 exec/s: 56 rss: 75Mb L: 20/40 MS: 1 ChangeBinInt- 00:06:29.081 [2024-10-05 17:55:50.453857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.453882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.081 #57 NEW cov: 12416 ft: 15263 corp: 34/679b lim: 40 exec/s: 57 rss: 75Mb L: 12/40 MS: 1 ChangeBit- 00:06:29.081 [2024-10-05 17:55:50.514079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE 
(1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.514104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.081 [2024-10-05 17:55:50.514159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:feffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.081 [2024-10-05 17:55:50.514173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.081 #58 NEW cov: 12416 ft: 15268 corp: 35/699b lim: 40 exec/s: 58 rss: 75Mb L: 20/40 MS: 1 ChangeBit- 00:06:29.340 [2024-10-05 17:55:50.554442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.341 [2024-10-05 17:55:50.554471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.341 [2024-10-05 17:55:50.554527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.341 [2024-10-05 17:55:50.554541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.341 [2024-10-05 17:55:50.554597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:903a9090 cdw11:90909090 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.341 [2024-10-05 17:55:50.554610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.341 [2024-10-05 17:55:50.554663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:90900000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.341 [2024-10-05 17:55:50.554676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.341 #59 NEW cov: 12416 ft: 15274 corp: 36/737b lim: 40 exec/s: 29 rss: 75Mb L: 38/40 MS: 1 ChangeByte- 00:06:29.341 #59 DONE cov: 12416 ft: 15274 corp: 36/737b lim: 40 exec/s: 29 rss: 75Mb 00:06:29.341 ###### Recommended dictionary. ###### 00:06:29.341 "\377\377\377\002" # Uses: 1 00:06:29.341 ###### End of recommended dictionary. 
###### 00:06:29.341 Done 59 runs in 2 second(s) 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.341 17:55:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:29.341 [2024-10-05 17:55:50.764986] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:29.341 [2024-10-05 17:55:50.765060] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483481 ] 00:06:29.599 [2024-10-05 17:55:50.944742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.599 [2024-10-05 17:55:51.011400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.857 [2024-10-05 17:55:51.070685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.857 [2024-10-05 17:55:51.087041] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:29.857 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.857 INFO: Seed: 3847012611 00:06:29.857 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:29.857 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:29.857 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:29.857 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.857 #2 INITED exec/s: 0 rss: 66Mb 00:06:29.857 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:29.857 This may also happen if the target rejected all inputs we tried so far 00:06:29.857 [2024-10-05 17:55:51.153507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.857 [2024-10-05 17:55:51.153551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.857 [2024-10-05 17:55:51.153676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.857 [2024-10-05 17:55:51.153697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.115 NEW_FUNC[1/715]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:30.115 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:30.115 #3 NEW cov: 12181 ft: 12181 corp: 2/19b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:30.115 [2024-10-05 17:55:51.484583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.115 [2024-10-05 17:55:51.484644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 [2024-10-05 17:55:51.484798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.115 [2024-10-05 17:55:51.484830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.115 #4 NEW cov: 12296 ft: 12778 corp: 3/37b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 CrossOver- 00:06:30.115 [2024-10-05 17:55:51.554495] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.115 [2024-10-05 17:55:51.554529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 [2024-10-05 17:55:51.554665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.115 [2024-10-05 17:55:51.554688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.115 #5 NEW cov: 12302 ft: 13060 corp: 4/55b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 ChangeBinInt- 00:06:30.373 [2024-10-05 17:55:51.604686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.604721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.373 [2024-10-05 17:55:51.604856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.604879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.373 #6 NEW cov: 12387 ft: 13287 corp: 5/72b lim: 35 exec/s: 0 rss: 73Mb L: 17/18 MS: 1 EraseBytes- 00:06:30.373 [2024-10-05 17:55:51.674900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.674931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.373 [2024-10-05 17:55:51.675060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.675084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.373 #7 NEW cov: 12387 ft: 13374 corp: 6/88b lim: 35 exec/s: 0 rss: 73Mb L: 16/18 MS: 1 EraseBytes- 00:06:30.373 [2024-10-05 17:55:51.714892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.714923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.373 [2024-10-05 17:55:51.715051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.715072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.373 #8 NEW cov: 12387 ft: 13410 corp: 7/106b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 ChangeBit- 00:06:30.373 [2024-10-05 17:55:51.775178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.775214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.373 [2024-10-05 
17:55:51.775350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.775372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.373 #9 NEW cov: 12387 ft: 13531 corp: 8/124b lim: 35 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 CopyPart- 00:06:30.373 [2024-10-05 17:55:51.825281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.825313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.373 [2024-10-05 17:55:51.825452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.373 [2024-10-05 17:55:51.825474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.631 #10 NEW cov: 12387 ft: 13558 corp: 9/140b lim: 35 exec/s: 0 rss: 74Mb L: 16/18 MS: 1 ChangeBit- 00:06:30.631 [2024-10-05 17:55:51.895542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:51.895575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.631 [2024-10-05 17:55:51.895706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:51.895727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.631 #16 NEW cov: 12387 ft: 13622 corp: 10/158b lim: 35 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 ShuffleBytes- 00:06:30.631 [2024-10-05 17:55:51.965783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:51.965817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.631 [2024-10-05 17:55:51.965951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:51.965975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.631 #17 NEW cov: 12387 ft: 13669 corp: 11/176b lim: 35 exec/s: 0 rss: 74Mb L: 18/18 MS: 1 ChangeBit- 00:06:30.631 [2024-10-05 17:55:52.016145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:52.016175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.631 [2024-10-05 17:55:52.016323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:52.016347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT 
SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.631 [2024-10-05 17:55:52.016480] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:52.016502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.631 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:30.631 #18 NEW cov: 12410 ft: 13929 corp: 12/199b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 CopyPart- 00:06:30.631 [2024-10-05 17:55:52.086092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:52.086124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.631 [2024-10-05 17:55:52.086254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.631 [2024-10-05 17:55:52.086276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.890 #19 NEW cov: 12410 ft: 13959 corp: 13/217b lim: 35 exec/s: 0 rss: 74Mb L: 18/23 MS: 1 CopyPart- 00:06:30.890 [2024-10-05 17:55:52.135967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.135997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.890 #20 NEW cov: 12410 ft: 14660 corp: 14/224b lim: 35 exec/s: 20 rss: 74Mb L: 7/23 MS: 1 CrossOver- 00:06:30.890 [2024-10-05 17:55:52.207090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.207120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.890 [2024-10-05 17:55:52.207245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.207269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.890 [2024-10-05 17:55:52.207398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.207418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.890 [2024-10-05 17:55:52.207546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.207565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.890 #26 NEW cov: 12410 ft: 15064 corp: 15/252b lim: 35 exec/s: 26 rss: 74Mb L: 28/28 MS: 1 CrossOver- 00:06:30.890 [2024-10-05 17:55:52.276341] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.276374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.890 #27 NEW cov: 12410 ft: 15072 corp: 16/262b lim: 35 exec/s: 27 rss: 74Mb L: 10/28 MS: 1 EraseBytes- 00:06:30.890 [2024-10-05 17:55:52.326725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.326755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.890 [2024-10-05 17:55:52.326905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.890 [2024-10-05 17:55:52.326928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.890 #28 NEW cov: 12410 ft: 15115 corp: 17/280b lim: 35 exec/s: 28 rss: 74Mb L: 18/28 MS: 1 ChangeBinInt- 00:06:31.148 [2024-10-05 17:55:52.376930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.376960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.148 [2024-10-05 17:55:52.377090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.377111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.148 #29 NEW cov: 12410 ft: 15125 corp: 18/299b lim: 35 exec/s: 29 rss: 74Mb L: 19/28 MS: 1 InsertByte- 00:06:31.148 [2024-10-05 17:55:52.427035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.427066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.148 [2024-10-05 17:55:52.427198] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.427218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.148 #30 NEW cov: 12410 ft: 15168 corp: 19/314b lim: 35 exec/s: 30 rss: 74Mb L: 15/28 MS: 1 EraseBytes- 00:06:31.148 [2024-10-05 17:55:52.477217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.477246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.148 [2024-10-05 17:55:52.477368] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.477389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.148 #31 NEW cov: 12410 ft: 15240 corp: 20/332b lim: 35 exec/s: 31 rss: 74Mb L: 18/28 MS: 1 ChangeByte- 00:06:31.148 [2024-10-05 17:55:52.547513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.547548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.148 [2024-10-05 17:55:52.547690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.547714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.148 #32 NEW cov: 12410 ft: 15295 corp: 21/348b lim: 35 exec/s: 32 rss: 74Mb L: 16/28 MS: 1 CopyPart- 00:06:31.148 [2024-10-05 17:55:52.597624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.597660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.148 [2024-10-05 17:55:52.597789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.148 [2024-10-05 17:55:52.597812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.407 #33 NEW cov: 12410 ft: 15306 corp: 22/366b lim: 35 exec/s: 33 rss: 74Mb L: 18/28 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:31.407 [2024-10-05 17:55:52.667939] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.667971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.407 [2024-10-05 17:55:52.668108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.668134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.407 #34 NEW cov: 12410 ft: 15337 corp: 23/384b lim: 35 exec/s: 34 rss: 74Mb L: 18/28 MS: 1 ShuffleBytes- 00:06:31.407 [2024-10-05 17:55:52.718646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.718680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.407 [2024-10-05 17:55:52.718814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.718840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.407 [2024-10-05 17:55:52.718962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:6 cdw10:80000009 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:31.407 [2024-10-05 17:55:52.718984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.407 [2024-10-05 17:55:52.719113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.719132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.407 NEW_FUNC[1/1]: 0x4705d8 in feat_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:332 00:06:31.407 #35 NEW cov: 12440 ft: 15388 corp: 24/412b lim: 35 exec/s: 35 rss: 74Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:31.407 [2024-10-05 17:55:52.767856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.767889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.407 #36 NEW cov: 12440 ft: 15450 corp: 25/420b lim: 35 exec/s: 36 rss: 74Mb L: 8/28 MS: 1 InsertByte- 00:06:31.407 [2024-10-05 17:55:52.838081] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.407 [2024-10-05 17:55:52.838114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.665 #37 NEW cov: 12440 ft: 15461 corp: 26/431b lim: 35 exec/s: 37 rss: 75Mb L: 11/28 MS: 1 EraseBytes- 00:06:31.665 [2024-10-05 17:55:52.909180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.665 [2024-10-05 17:55:52.909217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.665 [2024-10-05 17:55:52.909340] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.665 [2024-10-05 17:55:52.909362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.665 [2024-10-05 17:55:52.909497] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.665 [2024-10-05 17:55:52.909517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.665 [2024-10-05 17:55:52.909651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.665 [2024-10-05 17:55:52.909673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.665 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:31.665 #38 NEW cov: 12450 ft: 15482 corp: 27/460b lim: 35 exec/s: 38 rss: 75Mb L: 29/29 MS: 1 CrossOver- 00:06:31.665 [2024-10-05 17:55:52.958434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.665 [2024-10-05 17:55:52.958466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.665 #39 NEW cov: 12450 ft: 15493 corp: 28/471b lim: 35 exec/s: 39 rss: 75Mb L: 11/29 MS: 1 ShuffleBytes- 00:06:31.665 [2024-10-05 17:55:53.028899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.665 [2024-10-05 17:55:53.028929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.666 [2024-10-05 17:55:53.029060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.666 [2024-10-05 17:55:53.029083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.666 #40 NEW cov: 12450 ft: 15494 corp: 29/486b lim: 35 exec/s: 40 rss: 75Mb L: 15/29 MS: 1 EraseBytes- 00:06:31.666 [2024-10-05 17:55:53.099441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.666 [2024-10-05 17:55:53.099471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.666 [2024-10-05 17:55:53.099602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.666 [2024-10-05 17:55:53.099626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.666 [2024-10-05 17:55:53.099755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000006b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.666 [2024-10-05 17:55:53.099771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.924 #41 NEW cov: 12450 ft: 15517 corp: 30/507b lim: 35 exec/s: 20 rss: 75Mb L: 21/29 MS: 1 InsertRepeatedBytes- 00:06:31.924 #41 DONE cov: 12450 ft: 15517 corp: 30/507b lim: 35 exec/s: 20 rss: 75Mb 00:06:31.924 ###### Recommended dictionary. ###### 00:06:31.924 "\000\000\000\000\000\000\000\000" # Uses: 0 00:06:31.924 ###### End of recommended dictionary. 
###### 00:06:31.924 Done 41 runs in 2 second(s) 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.924 17:55:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:31.924 [2024-10-05 17:55:53.313252] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:31.924 [2024-10-05 17:55:53.313324] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484015 ] 00:06:32.182 [2024-10-05 17:55:53.488675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.182 [2024-10-05 17:55:53.553409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.182 [2024-10-05 17:55:53.612009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.182 [2024-10-05 17:55:53.628350] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:32.182 INFO: Running with entropic power schedule (0xFF, 100). 00:06:32.182 INFO: Seed: 2093051388 00:06:32.440 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:32.440 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:32.440 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:32.440 INFO: A corpus is not provided, starting from an empty corpus 00:06:32.440 #2 INITED exec/s: 0 rss: 66Mb 00:06:32.440 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:32.440 This may also happen if the target rejected all inputs we tried so far 00:06:32.440 [2024-10-05 17:55:53.674057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.441 [2024-10-05 17:55:53.674087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.441 [2024-10-05 17:55:53.674145] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.441 [2024-10-05 17:55:53.674159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.441 [2024-10-05 17:55:53.674219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.441 [2024-10-05 17:55:53.674233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.699 NEW_FUNC[1/714]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:32.699 NEW_FUNC[2/714]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:32.699 #8 NEW cov: 12166 ft: 12179 corp: 2/35b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:32.699 [2024-10-05 17:55:53.984703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.699 [2024-10-05 17:55:53.984736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.699 [2024-10-05 17:55:53.984796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.699 [2024-10-05 17:55:53.984809] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.699 [2024-10-05 17:55:53.984866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.699 [2024-10-05 17:55:53.984880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.699 NEW_FUNC[1/1]: 0x1f8f2a8 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:957 00:06:32.699 #12 NEW cov: 12298 ft: 13181 corp: 3/62b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 4 InsertByte-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:06:32.699 [2024-10-05 17:55:54.024724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.699 [2024-10-05 17:55:54.024752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.699 [2024-10-05 17:55:54.024869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.699 [2024-10-05 17:55:54.024885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.700 NEW_FUNC[1/1]: 0x46dbb8 in feat_error_recover /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:304 00:06:32.700 #13 NEW cov: 12331 ft: 13460 corp: 4/89b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 ChangeBinInt- 00:06:32.700 [2024-10-05 17:55:54.084823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.700 [2024-10-05 17:55:54.084849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.700 [2024-10-05 17:55:54.084910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.700 [2024-10-05 17:55:54.084924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.700 [2024-10-05 17:55:54.084980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.700 [2024-10-05 17:55:54.084994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.700 #24 NEW cov: 12416 ft: 13674 corp: 5/116b lim: 35 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 ChangeBit- 00:06:32.700 [2024-10-05 17:55:54.125191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.700 [2024-10-05 17:55:54.125218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.700 [2024-10-05 17:55:54.125277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.700 [2024-10-05 17:55:54.125292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:32.700 [2024-10-05 17:55:54.125351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.700 [2024-10-05 17:55:54.125365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.958 #25 NEW cov: 12416 ft: 13788 corp: 6/150b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:06:32.958 [2024-10-05 17:55:54.185460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.185486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.958 [2024-10-05 17:55:54.185596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.185611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.958 [2024-10-05 17:55:54.185665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.185678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.958 #26 NEW cov: 12416 ft: 14097 corp: 7/185b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:06:32.958 [2024-10-05 17:55:54.245355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.245382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.958 [2024-10-05 17:55:54.245506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.245538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.958 #27 NEW cov: 12416 ft: 14146 corp: 8/212b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 ChangeBit- 00:06:32.958 [2024-10-05 17:55:54.305682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.305709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.958 [2024-10-05 17:55:54.305766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.305782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.958 [2024-10-05 17:55:54.305840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.305853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.958 #28 NEW cov: 12416 ft: 14311 corp: 9/246b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 
ChangeBinInt- 00:06:32.958 [2024-10-05 17:55:54.345621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.958 [2024-10-05 17:55:54.345648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.959 [2024-10-05 17:55:54.345761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.959 [2024-10-05 17:55:54.345776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.959 #29 NEW cov: 12416 ft: 14394 corp: 10/273b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 ShuffleBytes- 00:06:32.959 [2024-10-05 17:55:54.385570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.959 [2024-10-05 17:55:54.385595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.959 #30 NEW cov: 12416 ft: 14670 corp: 11/290b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 InsertRepeatedBytes- 00:06:33.217 [2024-10-05 17:55:54.426031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.217 [2024-10-05 17:55:54.426057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.217 [2024-10-05 17:55:54.426112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.217 [2024-10-05 17:55:54.426125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.217 [2024-10-05 17:55:54.426181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.217 [2024-10-05 17:55:54.426205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.217 #31 NEW cov: 12416 ft: 14727 corp: 12/324b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeBit- 00:06:33.217 [2024-10-05 17:55:54.466113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.217 [2024-10-05 17:55:54.466141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.217 [2024-10-05 17:55:54.466209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.217 [2024-10-05 17:55:54.466225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.217 [2024-10-05 17:55:54.466281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.217 [2024-10-05 17:55:54.466296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.218 #32 NEW cov: 12416 ft: 14749 corp: 13/358b lim: 35 exec/s: 0 
rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:33.218 [2024-10-05 17:55:54.526331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.526361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.526419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.526433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.526488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.526501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.218 #33 NEW cov: 12416 ft: 14784 corp: 14/392b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeByte- 00:06:33.218 [2024-10-05 17:55:54.566275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.566300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.566358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.566372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.566427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.566441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.218 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:33.218 #34 NEW cov: 12439 ft: 14858 corp: 15/419b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 CrossOver- 00:06:33.218 [2024-10-05 17:55:54.626376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.626402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.626462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.626477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.626534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.626548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.218 #35 NEW cov: 12439 ft: 14880 
corp: 16/446b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 CrossOver- 00:06:33.218 [2024-10-05 17:55:54.666574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.666601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.666662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.666676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.666734] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.666749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.218 [2024-10-05 17:55:54.666806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.218 [2024-10-05 17:55:54.666820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.477 #36 NEW cov: 12439 ft: 15069 corp: 17/478b lim: 35 exec/s: 36 rss: 74Mb L: 32/35 MS: 1 CopyPart- 00:06:33.477 [2024-10-05 17:55:54.706773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.706798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.477 [2024-10-05 17:55:54.706856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.706870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.477 [2024-10-05 17:55:54.706928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.706941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.477 #37 NEW cov: 12439 ft: 15122 corp: 18/512b lim: 35 exec/s: 37 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:33.477 [2024-10-05 17:55:54.766651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.766677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.477 #38 NEW cov: 12439 ft: 15169 corp: 19/529b lim: 35 exec/s: 38 rss: 74Mb L: 17/35 MS: 1 CMP- DE: "\005\000"- 00:06:33.477 [2024-10-05 17:55:54.827148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.827173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.477 
[2024-10-05 17:55:54.827237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.827251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.477 [2024-10-05 17:55:54.827332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.827348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.477 #44 NEW cov: 12439 ft: 15179 corp: 20/563b lim: 35 exec/s: 44 rss: 74Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:33.477 [2024-10-05 17:55:54.887362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.887387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.477 [2024-10-05 17:55:54.887447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.887461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.477 [2024-10-05 17:55:54.887513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.887526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.477 [2024-10-05 17:55:54.887582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.477 [2024-10-05 17:55:54.887598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.477 #45 NEW cov: 12439 ft: 15193 corp: 21/598b lim: 35 exec/s: 45 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:06:33.742 [2024-10-05 17:55:54.947599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:54.947625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:54.947739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:54.947754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:54.947809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:54.947822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.742 #46 NEW cov: 12439 ft: 15214 corp: 22/633b lim: 35 exec/s: 46 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:33.742 [2024-10-05 17:55:55.007622] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.007648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.007707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.007721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.007777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.007791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.742 #47 NEW cov: 12439 ft: 15227 corp: 23/667b lim: 35 exec/s: 47 rss: 75Mb L: 34/35 MS: 1 PersAutoDict- DE: "\005\000"- 00:06:33.742 [2024-10-05 17:55:55.047523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.047549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.047607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.047621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.047678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.047693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.742 #48 NEW cov: 12439 ft: 15266 corp: 24/694b lim: 35 exec/s: 48 rss: 75Mb L: 27/35 MS: 1 ChangeBinInt- 00:06:33.742 [2024-10-05 17:55:55.107717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.107743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.107802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.107817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.107878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.107891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.742 #49 NEW cov: 12439 ft: 15271 corp: 25/721b lim: 35 exec/s: 49 rss: 75Mb L: 27/35 MS: 1 PersAutoDict- DE: "\005\000"- 00:06:33.742 [2024-10-05 17:55:55.168221] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.168253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.168322] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.168336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.168393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.168406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.742 [2024-10-05 17:55:55.168461] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.742 [2024-10-05 17:55:55.168475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.742 #50 NEW cov: 12439 ft: 15276 corp: 26/756b lim: 35 exec/s: 50 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:06:34.001 [2024-10-05 17:55:55.208198] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.208225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.208285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.208299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.208359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.208374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.002 #51 NEW cov: 12439 ft: 15304 corp: 27/790b lim: 35 exec/s: 51 rss: 75Mb L: 34/35 MS: 1 ChangeBit- 00:06:34.002 [2024-10-05 17:55:55.248204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.248245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.248305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.248319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.248378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.248392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.002 
[2024-10-05 17:55:55.248450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.248466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.002 #52 NEW cov: 12439 ft: 15340 corp: 28/823b lim: 35 exec/s: 52 rss: 75Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:34.002 [2024-10-05 17:55:55.308636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.308662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.308718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.308732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.308788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.308802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.308855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.308868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.002 #53 NEW cov: 12439 ft: 15369 corp: 29/858b lim: 35 exec/s: 53 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:06:34.002 [2024-10-05 17:55:55.348588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.348614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.348673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.348687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.348744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.348758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.002 #54 NEW cov: 12439 ft: 15450 corp: 30/892b lim: 35 exec/s: 54 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:06:34.002 [2024-10-05 17:55:55.388665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.388692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.388751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.388764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.388820] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.388834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.002 #55 NEW cov: 12439 ft: 15488 corp: 31/926b lim: 35 exec/s: 55 rss: 75Mb L: 34/35 MS: 1 CMP- DE: "\036\000"- 00:06:34.002 [2024-10-05 17:55:55.428778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.428804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.428867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.428881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.428939] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.428953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.002 [2024-10-05 17:55:55.429012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.002 [2024-10-05 17:55:55.429025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.002 #56 NEW cov: 12439 ft: 15504 corp: 32/957b lim: 35 exec/s: 56 rss: 75Mb L: 31/35 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:34.261 [2024-10-05 17:55:55.468804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.261 [2024-10-05 17:55:55.468830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.261 [2024-10-05 17:55:55.468886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.261 [2024-10-05 17:55:55.468900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.261 [2024-10-05 17:55:55.468956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.468970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.262 #57 NEW cov: 12439 ft: 15519 corp: 33/984b lim: 35 exec/s: 57 rss: 75Mb L: 27/35 MS: 1 ChangeBinInt- 00:06:34.262 [2024-10-05 17:55:55.509181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.509218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.509284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.509298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.509354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.509367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.509427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.509440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.262 #58 NEW cov: 12439 ft: 15523 corp: 34/1019b lim: 35 exec/s: 58 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:06:34.262 [2024-10-05 17:55:55.569204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.569239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.569310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.569326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.569388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.569403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.569463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.569477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.262 #59 NEW cov: 12439 ft: 15541 corp: 35/1050b lim: 35 exec/s: 59 rss: 75Mb L: 31/35 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:34.262 [2024-10-05 17:55:55.609210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.609237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.609296] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.609310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:34.262 #60 NEW cov: 12439 ft: 15550 corp: 36/1076b lim: 35 exec/s: 60 rss: 75Mb L: 26/35 MS: 1 EraseBytes- 00:06:34.262 [2024-10-05 17:55:55.669303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000071f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.669330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.669389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.669403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.262 [2024-10-05 17:55:55.669462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.262 [2024-10-05 17:55:55.669476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.262 #61 NEW cov: 12439 ft: 15554 corp: 37/1103b lim: 35 exec/s: 30 rss: 75Mb L: 27/35 MS: 1 PersAutoDict- DE: "\005\000"- 00:06:34.262 #61 DONE cov: 12439 ft: 15554 corp: 37/1103b lim: 35 exec/s: 30 rss: 75Mb 00:06:34.262 ###### Recommended dictionary. ###### 00:06:34.262 "\005\000" # Uses: 3 00:06:34.262 "\036\000" # Uses: 0 00:06:34.262 "\000\000\000\000" # Uses: 1 00:06:34.262 ###### End of recommended dictionary. ###### 00:06:34.262 Done 61 runs in 2 second(s) 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.521 
17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.521 17:55:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:34.521 [2024-10-05 17:55:55.861352] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:34.521 [2024-10-05 17:55:55.861424] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484446 ] 00:06:34.779 [2024-10-05 17:55:56.043268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.779 [2024-10-05 17:55:56.109794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.779 [2024-10-05 17:55:56.168396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.779 [2024-10-05 17:55:56.184749] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:34.779 INFO: Running with entropic power schedule (0xFF, 100). 00:06:34.779 INFO: Seed: 356087449 00:06:34.779 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:34.779 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:34.779 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:34.779 INFO: A corpus is not provided, starting from an empty corpus 00:06:34.779 #2 INITED exec/s: 0 rss: 65Mb 00:06:34.779 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:34.779 This may also happen if the target rejected all inputs we tried so far
00:06:34.780 [2024-10-05 17:55:56.240386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.780 [2024-10-05 17:55:56.240419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:34.780 [2024-10-05 17:55:56.240459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.780 [2024-10-05 17:55:56.240474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:34.780 [2024-10-05 17:55:56.240531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.780 [2024-10-05 17:55:56.240547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:34.780 [2024-10-05 17:55:56.240605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.780 [2024-10-05 17:55:56.240620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:35.296 NEW_FUNC[1/714]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519
00:06:35.296 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:35.296 #3 NEW cov: 12273 ft: 12269 corp: 2/95b lim: 105 exec/s: 0 rss: 73Mb L: 94/94 MS: 1 InsertRepeatedBytes-
00:06:35.296 [2024-10-05 17:55:56.570812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146404215145 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.296 [2024-10-05 17:55:56.570847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.296 NEW_FUNC[1/1]: 0x1916cd8 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1539
00:06:35.296 #5 NEW cov: 12388 ft: 13433 corp: 3/117b lim: 105 exec/s: 0 rss: 73Mb L: 22/94 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:06:35.296 [2024-10-05 17:55:56.610857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718148283263337 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.296 [2024-10-05 17:55:56.610888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.296 #6 NEW cov: 12394 ft: 13582 corp: 4/139b lim: 105 exec/s: 0 rss: 73Mb L: 22/94 MS: 1 ChangeByte-
00:06:35.296 [2024-10-05 17:55:56.671276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:217028215213064963 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.296 [2024-10-05 17:55:56.671306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.296 [2024-10-05 17:55:56.671346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.296 [2024-10-05 17:55:56.671362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.296 [2024-10-05 17:55:56.671418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.296 [2024-10-05 17:55:56.671434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.296 #7 NEW cov: 12479 ft: 14138 corp: 5/205b lim: 105 exec/s: 0 rss: 73Mb L: 66/94 MS: 1 CrossOver-
00:06:35.296 [2024-10-05 17:55:56.731195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718148283263337 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.296 [2024-10-05 17:55:56.731224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.554 #8 NEW cov: 12479 ft: 14393 corp: 6/227b lim: 105 exec/s: 0 rss: 73Mb L: 22/94 MS: 1 CopyPart-
00:06:35.555 [2024-10-05 17:55:56.791684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:217133106904367875 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.791713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.791765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.791782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.791838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.791854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.791911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.791930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:35.555 #9 NEW cov: 12479 ft: 14537 corp: 7/329b lim: 105 exec/s: 0 rss: 73Mb L: 102/102 MS: 1 CrossOver-
00:06:35.555 [2024-10-05 17:55:56.831692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913692490105 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.831720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.831769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.831785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.831844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.831877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.555 #12 NEW cov: 12479 ft: 14625 corp: 8/396b lim: 105 exec/s: 0 rss: 73Mb L: 67/102 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes-
00:06:35.555 [2024-10-05 17:55:56.871830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.871859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.871901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.871917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.871974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.871991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.555 #13 NEW cov: 12479 ft: 14641 corp: 9/462b lim: 105 exec/s: 0 rss: 74Mb L: 66/102 MS: 1 ChangeBinInt-
00:06:35.555 [2024-10-05 17:55:56.932103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:217133106904367875 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.932131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.932194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.932210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.932267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.932283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.932344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.932359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:35.555 #14 NEW cov: 12479 ft: 14690 corp: 10/564b lim: 105 exec/s: 0 rss: 74Mb L: 102/102 MS: 1 ShuffleBytes-
00:06:35.555 [2024-10-05 17:55:56.992145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.992174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.992232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.992249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.555 [2024-10-05 17:55:56.992303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.555 [2024-10-05 17:55:56.992318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.813 #15 NEW cov: 12479 ft: 14720 corp: 11/630b lim: 105 exec/s: 0 rss: 74Mb L: 66/102 MS: 1 ChangeBit-
00:06:35.813 [2024-10-05 17:55:57.052327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.813 [2024-10-05 17:55:57.052357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.813 [2024-10-05 17:55:57.052397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.813 [2024-10-05 17:55:57.052413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.813 [2024-10-05 17:55:57.052471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.813 [2024-10-05 17:55:57.052488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.813 #16 NEW cov: 12479 ft: 14747 corp: 12/697b lim: 105 exec/s: 0 rss: 74Mb L: 67/102 MS: 1 InsertByte-
00:06:35.813 [2024-10-05 17:55:57.112556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.813 [2024-10-05 17:55:57.112584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.813 [2024-10-05 17:55:57.112622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.813 [2024-10-05 17:55:57.112640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.813 [2024-10-05 17:55:57.112700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.813 [2024-10-05 17:55:57.112718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:35.813 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:06:35.814 #17 NEW cov: 12502 ft: 14799 corp: 13/763b lim: 105 exec/s: 0 rss: 74Mb L: 66/102 MS: 1 CopyPart-
00:06:35.814 [2024-10-05 17:55:57.152360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718147998050665 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.814 [2024-10-05 17:55:57.152389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.814 #18 NEW cov: 12502 ft: 14850 corp: 14/785b lim: 105 exec/s: 0 rss: 74Mb L: 22/102 MS: 1 CrossOver-
00:06:35.814 [2024-10-05 17:55:57.192500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146404215145 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.814 [2024-10-05 17:55:57.192533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.814 #19 NEW cov: 12502 ft: 14942 corp: 15/806b lim: 105 exec/s: 0 rss: 74Mb L: 21/102 MS: 1 EraseBytes-
00:06:35.814 [2024-10-05 17:55:57.232741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:217028215213064963 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.814 [2024-10-05 17:55:57.232768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:35.814 [2024-10-05 17:55:57.232807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.814 [2024-10-05 17:55:57.232823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:35.814 #20 NEW cov: 12502 ft: 15246 corp: 16/848b lim: 105 exec/s: 20 rss: 74Mb L: 42/102 MS: 1 EraseBytes-
00:06:35.814 [2024-10-05 17:55:57.272722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.814 [2024-10-05 17:55:57.272750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.072 #26 NEW cov: 12502 ft: 15257 corp: 17/882b lim: 105 exec/s: 26 rss: 74Mb L: 34/102 MS: 1 EraseBytes-
00:06:36.072 [2024-10-05 17:55:57.332929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146404215145 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.072 [2024-10-05 17:55:57.332958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.073 #27 NEW cov: 12502 ft: 15337 corp: 18/904b lim: 105 exec/s: 27 rss: 74Mb L: 22/102 MS: 1 ChangeBinInt-
00:06:36.073 [2024-10-05 17:55:57.373011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595620291748391273 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.373040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.073 #28 NEW cov: 12502 ft: 15384 corp: 19/927b lim: 105 exec/s: 28 rss: 74Mb L: 23/102 MS: 1 InsertByte-
00:06:36.073 [2024-10-05 17:55:57.413415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.413443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.073 [2024-10-05 17:55:57.413494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.413511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.073 [2024-10-05 17:55:57.413569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.413586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.073 #29 NEW cov: 12502 ft: 15388 corp: 20/993b lim: 105 exec/s: 29 rss: 74Mb L: 66/102 MS: 1 CrossOver-
00:06:36.073 [2024-10-05 17:55:57.473495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.473524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.073 [2024-10-05 17:55:57.473563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.473583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.073 #30 NEW cov: 12502 ft: 15408 corp: 21/1055b lim: 105 exec/s: 30 rss: 74Mb L: 62/102 MS: 1 EraseBytes-
00:06:36.073 [2024-10-05 17:55:57.513429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595620257388652905 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.073 [2024-10-05 17:55:57.513458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.332 #31 NEW cov: 12502 ft: 15421 corp: 22/1078b lim: 105 exec/s: 31 rss: 74Mb L: 23/102 MS: 1 ChangeBit-
00:06:36.332 [2024-10-05 17:55:57.573880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.573907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.573945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.573962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.574018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.574033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.332 #32 NEW cov: 12502 ft: 15453 corp: 23/1144b lim: 105 exec/s: 32 rss: 74Mb L: 66/102 MS: 1 ChangeBit-
00:06:36.332 [2024-10-05 17:55:57.633736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718148283263337 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.633766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.332 #33 NEW cov: 12502 ft: 15471 corp: 24/1166b lim: 105 exec/s: 33 rss: 74Mb L: 22/102 MS: 1 ShuffleBytes-
00:06:36.332 [2024-10-05 17:55:57.673875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.673903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.332 #34 NEW cov: 12502 ft: 15488 corp: 25/1202b lim: 105 exec/s: 34 rss: 74Mb L: 36/102 MS: 1 CrossOver-
00:06:36.332 [2024-10-05 17:55:57.714264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:228850164234912515 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.714293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.714343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.714359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.714419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.714435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.332 #35 NEW cov: 12502 ft: 15515 corp: 26/1268b lim: 105 exec/s: 35 rss: 74Mb L: 66/102 MS: 1 ChangeByte-
00:06:36.332 [2024-10-05 17:55:57.774575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.774609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.774648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.774664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.774723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217161256002585347 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.774740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.332 [2024-10-05 17:55:57.774799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.332 [2024-10-05 17:55:57.774815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:36.591 #36 NEW cov: 12502 ft: 15525 corp: 27/1362b lim: 105 exec/s: 36 rss: 74Mb L: 94/102 MS: 1 CopyPart-
00:06:36.591 [2024-10-05 17:55:57.814681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:217020518631670531 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.814710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.591 [2024-10-05 17:55:57.814763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.814780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.591 [2024-10-05 17:55:57.814834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.814851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.591 [2024-10-05 17:55:57.814909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.814924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:36.591 #37 NEW cov: 12502 ft: 15541 corp: 28/1456b lim: 105 exec/s: 37 rss: 74Mb L: 94/102 MS: 1 ShuffleBytes-
00:06:36.591 [2024-10-05 17:55:57.854384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595620257388652905 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.854413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.591 #38 NEW cov: 12502 ft: 15553 corp: 29/1479b lim: 105 exec/s: 38 rss: 75Mb L: 23/102 MS: 1 CopyPart-
00:06:36.591 [2024-10-05 17:55:57.914503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718146404215145 len:38551 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.914532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.591 #39 NEW cov: 12502 ft: 15571 corp: 30/1501b lim: 105 exec/s: 39 rss: 75Mb L: 22/102 MS: 1 ChangeBinInt-
00:06:36.591 [2024-10-05 17:55:57.954631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18408016426790682623 len:26986 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:57.954659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.591 #40 NEW cov: 12502 ft: 15587 corp: 31/1522b lim: 105 exec/s: 40 rss: 75Mb L: 21/102 MS: 1 CMP- DE: "\377\377\377v"-
00:06:36.591 [2024-10-05 17:55:58.015075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:234761138745836291 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:58.015104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.591 [2024-10-05 17:55:58.015148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:58.015164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.591 [2024-10-05 17:55:58.015228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217161256002585347 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.591 [2024-10-05 17:55:58.015247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.850 #41 NEW cov: 12502 ft: 15648 corp: 32/1589b lim: 105 exec/s: 41 rss: 75Mb L: 67/102 MS: 1 EraseBytes-
00:06:36.850 [2024-10-05 17:55:58.075020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18408016426790682623 len:27035 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.850 [2024-10-05 17:55:58.075047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.850 #42 NEW cov: 12502 ft: 15774 corp: 33/1610b lim: 105 exec/s: 42 rss: 75Mb L: 21/102 MS: 1 ChangeBinInt-
00:06:36.850 [2024-10-05 17:55:58.135164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7595718147998050665 len:26881 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.850 [2024-10-05 17:55:58.135201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.850 #43 NEW cov: 12502 ft: 15783 corp: 34/1632b lim: 105 exec/s: 43 rss: 75Mb L: 22/102 MS: 1 ChangeBinInt-
00:06:36.850 [2024-10-05 17:55:58.195544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:228850164234912515 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.850 [2024-10-05 17:55:58.195572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:36.850 [2024-10-05 17:55:58.195621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:217020518514230019 len:772 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.850 [2024-10-05 17:55:58.195637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:36.850 [2024-10-05 17:55:58.195694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:217020518514230019 len:862 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.850 [2024-10-05 17:55:58.195727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:36.850 #44 NEW cov: 12502 ft: 15854 corp: 35/1698b lim: 105 exec/s: 22 rss: 75Mb L: 66/102 MS: 1 ChangeByte-
00:06:36.850 #44 DONE cov: 12502 ft: 15854 corp: 35/1698b lim: 105 exec/s: 22 rss: 75Mb
00:06:36.850 ###### Recommended dictionary. ######
00:06:36.850 "\377\377\377v" # Uses: 0
00:06:36.850 ###### End of recommended dictionary. ######
00:06:36.850 Done 44 runs in 2 second(s)
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:37.111 17:55:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
00:06:37.111 [2024-10-05 17:55:58.407542] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:06:37.111 [2024-10-05 17:55:58.407633] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484838 ]
00:06:37.368 [2024-10-05 17:55:58.584772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.368 [2024-10-05 17:55:58.651510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.368 [2024-10-05 17:55:58.710778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:37.368 [2024-10-05 17:55:58.727147] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:06:37.369 INFO: Running with entropic power schedule (0xFF, 100).
00:06:37.369 INFO: Seed: 2897080684
00:06:37.369 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d),
00:06:37.369 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40),
00:06:37.369 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:06:37.369 INFO: A corpus is not provided, starting from an empty corpus
00:06:37.369 #2 INITED exec/s: 0 rss: 65Mb
00:06:37.369 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:37.369 This may also happen if the target rejected all inputs we tried so far
00:06:37.369 [2024-10-05 17:55:58.792363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.369 [2024-10-05 17:55:58.792394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:37.885 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540
00:06:37.885 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:37.885 #6 NEW cov: 12296 ft: 12294 corp: 2/40b lim: 120 exec/s: 0 rss: 73Mb L: 39/39 MS: 4 InsertByte-CopyPart-InsertByte-InsertRepeatedBytes-
00:06:37.885 [2024-10-05 17:55:59.113357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12133085940521001313 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.113428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:37.885 #7 NEW cov: 12409 ft: 13055 corp: 3/80b lim: 120 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertByte-
00:06:37.885 [2024-10-05 17:55:59.173274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.173304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:37.885 #8 NEW cov: 12415 ft: 13323 corp: 4/127b lim: 120 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"-
00:06:37.885 [2024-10-05 17:55:59.213475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.213503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:37.885 [2024-10-05 17:55:59.213539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.213554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:37.885 #13 NEW cov: 12500 ft: 14326 corp: 5/188b lim: 120 exec/s: 0 rss: 73Mb L: 61/61 MS: 5 ShuffleBytes-InsertByte-InsertByte-ShuffleBytes-InsertRepeatedBytes-
00:06:37.885 [2024-10-05 17:55:59.253518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.253546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:37.885 #19 NEW cov: 12500 ft: 14434 corp: 6/235b lim: 120 exec/s: 0 rss: 73Mb L: 47/61 MS: 1 ChangeBit-
00:06:37.885 [2024-10-05 17:55:59.313816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.313844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:37.885 [2024-10-05 17:55:59.313877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:37.885 [2024-10-05 17:55:59.313893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:37.885 #20 NEW cov: 12500 ft: 14512 corp: 7/288b lim: 120 exec/s: 0 rss: 73Mb L: 53/61 MS: 1 CopyPart-
00:06:38.143 [2024-10-05 17:55:59.353899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.353927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.143 [2024-10-05 17:55:59.353978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.353993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.143 #21 NEW cov: 12500 ft: 14592 corp: 8/349b lim: 120 exec/s: 0 rss: 73Mb L: 61/61 MS: 1 ChangeByte-
00:06:38.143 [2024-10-05 17:55:59.414102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.414130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.143 [2024-10-05 17:55:59.414167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.414192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.143 #22 NEW cov: 12500 ft: 14618 corp: 9/410b lim: 120 exec/s: 0 rss: 73Mb L: 61/61 MS: 1 ShuffleBytes-
00:06:38.143 [2024-10-05 17:55:59.454083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.454112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.143 #23 NEW cov: 12500 ft: 14669 corp: 10/454b lim: 120 exec/s: 0 rss: 73Mb L: 44/61 MS: 1 EraseBytes-
00:06:38.143 [2024-10-05 17:55:59.514244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016890794100023649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.514272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.143 #24 NEW cov: 12500 ft: 14701 corp: 11/501b lim: 120 exec/s: 0 rss: 73Mb L: 47/61 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"-
00:06:38.143 [2024-10-05 17:55:59.554336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016890794100023649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.143 [2024-10-05 17:55:59.554364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.143 #25 NEW cov: 12500 ft: 14768 corp: 12/548b lim: 120 exec/s: 0 rss: 74Mb L: 47/61 MS: 1 ChangeByte-
00:06:38.401 [2024-10-05 17:55:59.614677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.614705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.614759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.614776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.401 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:06:38.401 #26 NEW cov: 12523 ft: 14789 corp: 13/609b lim: 120 exec/s: 0 rss: 74Mb L: 61/61 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\013"-
00:06:38.401 [2024-10-05 17:55:59.675150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.675178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.675228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.675244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.675297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.675330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.675385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6944656592455360559 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.675399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:38.401 #27 NEW cov: 12523 ft: 15253 corp: 14/712b lim: 120 exec/s: 0 rss: 74Mb L: 103/103 MS: 1 CopyPart-
00:06:38.401 [2024-10-05 17:55:59.714771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070102450175 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.714800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.401 #30 NEW cov: 12523 ft: 15285 corp: 15/746b lim: 120 exec/s: 0 rss: 74Mb L: 34/103 MS: 3 ChangeBit-ChangeBinInt-InsertRepeatedBytes-
00:06:38.401 [2024-10-05 17:55:59.754885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070102450175 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.754913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.401 #31 NEW cov: 12523 ft: 15341 corp: 16/781b lim: 120 exec/s: 31 rss: 74Mb L: 35/103 MS: 1 InsertByte-
00:06:38.401 [2024-10-05 17:55:59.815505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.815534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.815583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.815599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.815652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.815669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:38.401 [2024-10-05 17:55:59.815719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6944656592455360559 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.401 [2024-10-05 17:55:59.815735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:38.401 #32 NEW cov: 12523 ft: 15358 corp: 17/884b lim: 120 exec/s: 32 rss: 74Mb L: 103/103 MS: 1 CopyPart-
00:06:38.660 [2024-10-05 17:55:59.875235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12133085940521001313 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.875263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.660 #33 NEW cov: 12523 ft: 15379 corp: 18/924b lim: 120 exec/s: 33 rss: 74Mb L: 40/103 MS: 1 CrossOver-
00:06:38.660 [2024-10-05 17:55:59.935845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.935873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.660 [2024-10-05 17:55:59.935915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.935930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.660 [2024-10-05 17:55:59.935983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.935999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:38.660 [2024-10-05 17:55:59.936056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14943049530665361455 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.936075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:38.660 #34 NEW cov: 12523 ft: 15434 corp: 19/1028b lim: 120 exec/s: 34 rss: 74Mb L: 104/104 MS: 1 InsertByte-
00:06:38.660 [2024-10-05 17:55:59.975668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7025159125835866465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.975696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.660 [2024-10-05 17:55:59.975742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:55:59.975759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.660 #35 NEW cov: 12523 ft: 15460 corp: 20/1076b lim: 120 exec/s: 35 rss: 74Mb L: 48/104 MS: 1 InsertByte-
00:06:38.660 [2024-10-05 17:56:00.035855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:56:00.035884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.660 [2024-10-05 17:56:00.035930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:56:00.035947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.660 #36 NEW cov: 12523 ft: 15471 corp: 21/1126b lim: 120 exec/s: 36 rss: 74Mb L: 50/104 MS: 1 CrossOver-
00:06:38.660 [2024-10-05 17:56:00.075828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:56:00.075859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.660 #37 NEW cov: 12523 ft: 15474 corp: 22/1165b lim: 120 exec/s: 37 rss: 74Mb L: 39/104 MS: 1 ChangeBinInt-
00:06:38.660 [2024-10-05 17:56:00.115940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12133085940521001313 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.660 [2024-10-05 17:56:00.115970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.917 #38 NEW cov: 12523 ft: 15575 corp: 23/1205b lim: 120 exec/s: 38 rss: 74Mb L: 40/104 MS: 1 CMP- DE: "\376\377\377\365"-
00:06:38.917 [2024-10-05 17:56:00.156470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.156500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.917 [2024-10-05 17:56:00.156547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.156563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.917 [2024-10-05 17:56:00.156618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.156634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:38.917 [2024-10-05 17:56:00.156689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6944656592455360559 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.156708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:38.917 #39 NEW cov: 12523 ft: 15591 corp: 24/1308b lim: 120 exec/s: 39 rss: 74Mb L: 103/104 MS: 1 ShuffleBytes-
00:06:38.917 [2024-10-05 17:56:00.196168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:19298 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.196203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.917 #40 NEW cov: 12523 ft: 15614 corp: 25/1348b lim: 120 exec/s: 40 rss: 74Mb L: 40/104 MS: 1 InsertByte-
00:06:38.917 [2024-10-05 17:56:00.256604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12133085940521001313 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.256633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.917 [2024-10-05 17:56:00.256666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.256682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.917 [2024-10-05 17:56:00.256737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446743395104718847 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.917 [2024-10-05 17:56:00.256754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:38.917 #41 NEW cov: 12523 ft: 15958 corp: 26/1420b lim: 120 exec/s: 41 rss: 74Mb L: 72/104 MS: 1 CrossOver-
00:06:38.917 [2024-10-05 17:56:00.316663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.918 [2024-10-05 17:56:00.316692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.918 [2024-10-05 17:56:00.316744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.918 [2024-10-05 17:56:00.316759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.918 #42 NEW cov: 12523 ft: 16045 corp: 27/1481b lim: 120 exec/s: 42 rss: 74Mb L: 61/104 MS: 1 PersAutoDict- DE: "\376\377\377\365"-
00:06:38.918 [2024-10-05 17:56:00.356735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72057594206380385 len:257 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.918 [2024-10-05 17:56:00.356764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:38.918 [2024-10-05 17:56:00.356801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24882 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:38.918 [2024-10-05 17:56:00.356817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:38.918 #43 NEW cov: 12523 ft: 16072 corp: 28/1532b lim: 120 exec/s: 43 rss: 74Mb L: 51/104 MS: 1 CMP- DE: "\001\000\000\000"-
00:06:39.175 [2024-10-05 17:56:00.396872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979823456 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.175 [2024-10-05 17:56:00.396901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.175 [2024-10-05 17:56:00.396951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.175 [2024-10-05 17:56:00.396967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:39.175 #44 NEW cov: 12523 ft: 16119 corp: 29/1593b lim: 120 exec/s: 44 rss: 74Mb L: 61/104 MS: 1 ChangeByte-
00:06:39.175 [2024-10-05 17:56:00.436853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.175 [2024-10-05 17:56:00.436882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.175 #45 NEW cov: 12523 ft: 16150 corp: 30/1632b lim: 120 exec/s: 45 rss: 74Mb L: 39/104 MS: 1 ChangeBinInt-
00:06:39.175 [2024-10-05 17:56:00.477098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.175 [2024-10-05 17:56:00.477128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.175 [2024-10-05 17:56:00.477162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.175 [2024-10-05 17:56:00.477177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:39.175 #46 NEW cov: 12523 ft: 16159 corp: 31/1687b lim: 120 exec/s: 46 rss: 75Mb L: 55/104 MS: 1 CMP- DE: "\000\037"-
00:06:39.175 [2024-10-05 17:56:00.537121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2738188569834160127 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.175 [2024-10-05 17:56:00.537149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.175 #47 NEW cov: 12523 ft: 16167 corp: 32/1723b lim: 120 exec/s: 47 rss: 75Mb L: 36/104 MS: 1 InsertByte-
00:06:39.175 [2024-10-05 17:56:00.597730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6944656593979793504 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.176 [2024-10-05 17:56:00.597759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.176 [2024-10-05 17:56:00.597805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.176 [2024-10-05 17:56:00.597821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:39.176 [2024-10-05 17:56:00.597874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.176 [2024-10-05 17:56:00.597888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:39.176 [2024-10-05 17:56:00.597941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6944656592455360608 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.176 [2024-10-05 17:56:00.597957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:39.176 #48 NEW cov: 12523 ft: 16170 corp: 33/1836b lim: 120 exec/s: 48 rss: 75Mb L: 113/113 MS: 1 CopyPart-
00:06:39.176 [2024-10-05 17:56:00.637419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016890794100023649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.176 [2024-10-05 17:56:00.637448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.434 #49 NEW cov: 12523 ft: 16196 corp: 34/1883b lim: 120 exec/s: 49 rss: 75Mb L: 47/113 MS: 1 ChangeBinInt-
00:06:39.434 [2024-10-05 17:56:00.677455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7016996763828117857 len:24930 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.434 [2024-10-05 17:56:00.677484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.434 #50 NEW cov: 12523 ft: 16203 corp: 35/1926b lim: 120 exec/s: 50 rss: 75Mb L: 43/113 MS: 1 PersAutoDict- DE: "\376\377\377\365"-
00:06:39.434 [2024-10-05 17:56:00.737674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2738188569834160127 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:39.434 [2024-10-05 17:56:00.737702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:39.434 #51 NEW cov: 12523 ft: 16246 corp: 36/1962b lim: 120 exec/s: 25 rss: 75Mb L: 36/113 MS: 1 ChangeBit-
00:06:39.434 #51 DONE cov: 12523 ft: 16246 corp: 36/1962b lim: 120 exec/s: 25 rss: 75Mb
00:06:39.434 ###### Recommended dictionary. ######
00:06:39.434 "\001\000\000\000\000\000\000\000" # Uses: 1
00:06:39.434 "\000\000\000\000\000\000\000\013" # Uses: 0
00:06:39.434 "\376\377\377\365" # Uses: 2
00:06:39.434 "\001\000\000\000" # Uses: 0
00:06:39.434 "\000\037" # Uses: 0
00:06:39.434 ###### End of recommended dictionary. ######
00:06:39.434 Done 51 runs in 2 second(s)
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:39.693 17:56:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
00:06:39.693 [2024-10-05 17:56:00.947857] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:06:39.693 [2024-10-05 17:56:00.947927] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485373 ]
00:06:39.950 [2024-10-05 17:56:01.130251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.950 [2024-10-05 17:56:01.200731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.950 [2024-10-05 17:56:01.259362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:39.950 [2024-10-05 17:56:01.275685] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
00:06:39.950 INFO: Running with entropic power schedule (0xFF, 100).
00:06:39.950 INFO: Seed: 1152122950
00:06:39.950 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d),
00:06:39.950 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40),
00:06:39.950 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:06:39.950 INFO: A corpus is not provided, starting from an empty corpus
00:06:39.950 #2 INITED exec/s: 0 rss: 66Mb
00:06:39.950 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:39.950 This may also happen if the target rejected all inputs we tried so far 00:06:39.950 [2024-10-05 17:56:01.320853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:39.950 [2024-10-05 17:56:01.320883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.207 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:40.207 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.207 #7 NEW cov: 12238 ft: 12216 corp: 2/26b lim: 100 exec/s: 0 rss: 73Mb L: 25/25 MS: 5 ShuffleBytes-CrossOver-InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:06:40.207 [2024-10-05 17:56:01.631789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.207 [2024-10-05 17:56:01.631830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.465 #8 NEW cov: 12352 ft: 12909 corp: 3/51b lim: 100 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:06:40.465 [2024-10-05 17:56:01.692059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.465 [2024-10-05 17:56:01.692088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.465 [2024-10-05 17:56:01.692129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:40.465 [2024-10-05 17:56:01.692143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.465 [2024-10-05 17:56:01.692203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:40.465 [2024-10-05 17:56:01.692218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.465 #9 NEW cov: 12358 ft: 13521 corp: 4/128b lim: 100 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 InsertRepeatedBytes- 00:06:40.465 [2024-10-05 17:56:01.752009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.465 [2024-10-05 17:56:01.752038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.465 #10 NEW cov: 12443 ft: 13803 corp: 5/153b lim: 100 exec/s: 0 rss: 73Mb L: 25/77 MS: 1 ShuffleBytes- 00:06:40.465 [2024-10-05 17:56:01.792366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.465 [2024-10-05 17:56:01.792394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.465 [2024-10-05 17:56:01.792432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:40.465 [2024-10-05 17:56:01.792447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.465 [2024-10-05 17:56:01.792505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:2 nsid:0 00:06:40.465 [2024-10-05 17:56:01.792520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.465 #11 NEW cov: 12443 ft: 13946 corp: 6/230b lim: 100 exec/s: 0 rss: 74Mb L: 77/77 MS: 1 ShuffleBytes- 00:06:40.465 [2024-10-05 17:56:01.852243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.465 [2024-10-05 17:56:01.852271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.465 #12 NEW cov: 12443 ft: 14062 corp: 7/255b lim: 100 exec/s: 0 rss: 74Mb L: 25/77 MS: 1 ChangeBinInt- 00:06:40.465 [2024-10-05 17:56:01.892578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.465 [2024-10-05 17:56:01.892605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.465 [2024-10-05 17:56:01.892642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:40.465 [2024-10-05 17:56:01.892657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.465 [2024-10-05 17:56:01.892714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:40.465 [2024-10-05 17:56:01.892728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.723 #13 NEW cov: 12443 ft: 14132 corp: 8/332b lim: 100 exec/s: 0 rss: 74Mb L: 77/77 MS: 1 ChangeBit- 00:06:40.723 [2024-10-05 17:56:01.952545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.723 [2024-10-05 17:56:01.952573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.723 #14 NEW cov: 12443 ft: 14212 corp: 9/357b lim: 100 exec/s: 0 rss: 74Mb L: 25/77 MS: 1 ShuffleBytes- 00:06:40.723 [2024-10-05 17:56:01.992765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.723 [2024-10-05 17:56:01.992793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.723 [2024-10-05 17:56:01.992828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:40.723 [2024-10-05 17:56:01.992843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.723 #15 NEW cov: 12443 ft: 14494 corp: 10/410b lim: 100 exec/s: 0 rss: 74Mb L: 53/77 MS: 1 EraseBytes- 00:06:40.723 [2024-10-05 17:56:02.052812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.723 [2024-10-05 17:56:02.052840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.723 #16 NEW cov: 12443 ft: 14515 corp: 11/435b lim: 100 exec/s: 0 rss: 74Mb L: 25/77 MS: 1 CMP- DE: "\037\000"- 00:06:40.723 [2024-10-05 17:56:02.113011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.723 [2024-10-05 
17:56:02.113038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.723 #17 NEW cov: 12443 ft: 14528 corp: 12/468b lim: 100 exec/s: 0 rss: 74Mb L: 33/77 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\001"- 00:06:40.723 [2024-10-05 17:56:02.173157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.723 [2024-10-05 17:56:02.173184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.980 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:40.980 #18 NEW cov: 12466 ft: 14544 corp: 13/493b lim: 100 exec/s: 0 rss: 74Mb L: 25/77 MS: 1 ChangeByte- 00:06:40.980 [2024-10-05 17:56:02.233334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.980 [2024-10-05 17:56:02.233363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.980 #19 NEW cov: 12466 ft: 14565 corp: 14/526b lim: 100 exec/s: 0 rss: 74Mb L: 33/77 MS: 1 ShuffleBytes- 00:06:40.980 [2024-10-05 17:56:02.293866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.980 [2024-10-05 17:56:02.293894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.980 [2024-10-05 17:56:02.293944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:40.980 [2024-10-05 17:56:02.293960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.980 [2024-10-05 17:56:02.294018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:40.981 [2024-10-05 17:56:02.294034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:40.981 [2024-10-05 17:56:02.294091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:40.981 [2024-10-05 17:56:02.294106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:40.981 #20 NEW cov: 12466 ft: 14872 corp: 15/610b lim: 100 exec/s: 20 rss: 74Mb L: 84/84 MS: 1 CrossOver- 00:06:40.981 [2024-10-05 17:56:02.333790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.981 [2024-10-05 17:56:02.333817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.981 [2024-10-05 17:56:02.333859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:40.981 [2024-10-05 17:56:02.333875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:40.981 #26 NEW cov: 12466 ft: 14882 corp: 16/662b lim: 100 exec/s: 26 rss: 74Mb L: 52/84 MS: 1 InsertRepeatedBytes- 00:06:40.981 [2024-10-05 17:56:02.373741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.981 
[2024-10-05 17:56:02.373770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.981 #27 NEW cov: 12466 ft: 14887 corp: 17/691b lim: 100 exec/s: 27 rss: 74Mb L: 29/84 MS: 1 InsertRepeatedBytes- 00:06:40.981 [2024-10-05 17:56:02.413886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:40.981 [2024-10-05 17:56:02.413913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:40.981 #28 NEW cov: 12466 ft: 14891 corp: 18/716b lim: 100 exec/s: 28 rss: 74Mb L: 25/84 MS: 1 ChangeBinInt- 00:06:41.239 [2024-10-05 17:56:02.454196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.239 [2024-10-05 17:56:02.454223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.239 [2024-10-05 17:56:02.454277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:41.239 [2024-10-05 17:56:02.454292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.239 [2024-10-05 17:56:02.454349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:41.239 [2024-10-05 17:56:02.454364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.239 #34 NEW cov: 12466 ft: 14912 corp: 19/776b lim: 100 exec/s: 34 rss: 74Mb L: 60/84 MS: 1 CopyPart- 00:06:41.239 [2024-10-05 17:56:02.514161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.239 [2024-10-05 17:56:02.514192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.239 #38 NEW cov: 12466 ft: 14934 corp: 20/800b lim: 100 exec/s: 38 rss: 74Mb L: 24/84 MS: 4 EraseBytes-CopyPart-ChangeBinInt-CopyPart- 00:06:41.239 [2024-10-05 17:56:02.554294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.239 [2024-10-05 17:56:02.554322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.239 #43 NEW cov: 12466 ft: 14937 corp: 21/824b lim: 100 exec/s: 43 rss: 74Mb L: 24/84 MS: 5 EraseBytes-ChangeBit-CopyPart-CMP-CrossOver- DE: "\000\000"- 00:06:41.239 [2024-10-05 17:56:02.594397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.239 [2024-10-05 17:56:02.594425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.239 #44 NEW cov: 12466 ft: 14945 corp: 22/857b lim: 100 exec/s: 44 rss: 74Mb L: 33/84 MS: 1 ChangeByte- 00:06:41.239 [2024-10-05 17:56:02.634637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.239 [2024-10-05 17:56:02.634664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.239 [2024-10-05 17:56:02.634710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:1 nsid:0 00:06:41.239 [2024-10-05 17:56:02.634725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.239 #45 NEW cov: 12466 ft: 14973 corp: 23/898b lim: 100 exec/s: 45 rss: 75Mb L: 41/84 MS: 1 CopyPart- 00:06:41.239 [2024-10-05 17:56:02.694953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.239 [2024-10-05 17:56:02.694979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.239 [2024-10-05 17:56:02.695017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:41.239 [2024-10-05 17:56:02.695032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.239 [2024-10-05 17:56:02.695090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:41.239 [2024-10-05 17:56:02.695107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.497 #46 NEW cov: 12466 ft: 14990 corp: 24/975b lim: 100 exec/s: 46 rss: 75Mb L: 77/84 MS: 1 ChangeBit- 00:06:41.497 [2024-10-05 17:56:02.734782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.497 [2024-10-05 17:56:02.734809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.497 #47 NEW cov: 12466 ft: 15008 corp: 25/1000b lim: 100 exec/s: 47 rss: 75Mb L: 25/84 MS: 1 ChangeBinInt- 00:06:41.497 [2024-10-05 17:56:02.774903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.497 [2024-10-05 17:56:02.774930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.497 #48 NEW cov: 12466 ft: 15061 corp: 26/1025b lim: 100 exec/s: 48 rss: 75Mb L: 25/84 MS: 1 ChangeBit- 00:06:41.497 [2024-10-05 17:56:02.815022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.497 [2024-10-05 17:56:02.815050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.497 #49 NEW cov: 12466 ft: 15067 corp: 27/1058b lim: 100 exec/s: 49 rss: 75Mb L: 33/84 MS: 1 ChangeBinInt- 00:06:41.497 [2024-10-05 17:56:02.875469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.497 [2024-10-05 17:56:02.875496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.497 [2024-10-05 17:56:02.875532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:41.497 [2024-10-05 17:56:02.875547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.497 [2024-10-05 17:56:02.875605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:41.497 [2024-10-05 17:56:02.875621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.497 #50 NEW cov: 12466 ft: 15095 corp: 28/1131b lim: 100 exec/s: 50 rss: 75Mb L: 73/84 MS: 1 InsertRepeatedBytes- 00:06:41.497 [2024-10-05 17:56:02.915311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.497 [2024-10-05 17:56:02.915339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.497 #51 NEW cov: 12466 ft: 15121 corp: 29/1165b lim: 100 exec/s: 51 rss: 75Mb L: 34/84 MS: 1 InsertByte- 00:06:41.755 [2024-10-05 17:56:02.975672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.755 [2024-10-05 17:56:02.975701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.755 [2024-10-05 17:56:02.975740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:41.755 [2024-10-05 17:56:02.975756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.755 [2024-10-05 17:56:02.975813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:41.755 [2024-10-05 17:56:02.975829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:41.755 #52 NEW cov: 12466 ft: 15133 corp: 30/1238b lim: 100 exec/s: 52 rss: 75Mb L: 73/84 MS: 1 ShuffleBytes- 00:06:41.755 [2024-10-05 17:56:03.035637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.755 [2024-10-05 17:56:03.035665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.755 #53 NEW cov: 12466 ft: 15155 corp: 31/1263b lim: 100 exec/s: 53 rss: 75Mb L: 25/84 MS: 1 PersAutoDict- DE: "\037\000"- 00:06:41.755 [2024-10-05 17:56:03.075751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.755 [2024-10-05 17:56:03.075778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.755 #59 NEW cov: 12466 ft: 15169 corp: 32/1289b lim: 100 exec/s: 59 rss: 75Mb L: 26/84 MS: 1 InsertByte- 00:06:41.755 [2024-10-05 17:56:03.135942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.755 [2024-10-05 17:56:03.135969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.755 #60 NEW cov: 12466 ft: 15175 corp: 33/1319b lim: 100 exec/s: 60 rss: 75Mb L: 30/84 MS: 1 EraseBytes- 00:06:41.755 [2024-10-05 17:56:03.176176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.755 [2024-10-05 17:56:03.176208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:41.755 [2024-10-05 17:56:03.176264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:41.755 [2024-10-05 17:56:03.176279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:41.755 #61 NEW cov: 12466 ft: 15205 corp: 34/1373b lim: 100 exec/s: 61 rss: 75Mb L: 54/84 MS: 1 InsertRepeatedBytes- 00:06:41.755 [2024-10-05 17:56:03.216173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:41.755 [2024-10-05 17:56:03.216210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.014 #62 NEW cov: 12466 ft: 15328 corp: 35/1402b lim: 100 exec/s: 62 rss: 75Mb L: 29/84 MS: 1 ShuffleBytes- 00:06:42.014 [2024-10-05 17:56:03.276302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:42.014 [2024-10-05 17:56:03.276330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.014 #64 pulse cov: 12466 ft: 15332 corp: 35/1402b lim: 100 exec/s: 32 rss: 75Mb 00:06:42.014 #64 NEW cov: 12466 ft: 15332 corp: 36/1425b lim: 100 exec/s: 32 rss: 75Mb L: 23/84 MS: 2 EraseBytes-CMP- DE: "%-\201\346\350\361i\000"- 00:06:42.014 #64 DONE cov: 12466 ft: 15332 corp: 36/1425b lim: 100 exec/s: 32 rss: 75Mb 00:06:42.014 ###### Recommended dictionary. ###### 00:06:42.014 "\037\000" # Uses: 1 00:06:42.014 "\001\000\000\000\000\000\000\001" # Uses: 1 00:06:42.014 "\000\000" # Uses: 0 00:06:42.014 "%-\201\346\350\361i\000" # Uses: 0 00:06:42.014 ###### End of recommended dictionary. ###### 00:06:42.014 Done 64 runs in 2 second(s) 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 
00:06:42.014 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.015 17:56:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:42.273 [2024-10-05 17:56:03.486813] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:42.273 [2024-10-05 17:56:03.486883] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485757 ] 00:06:42.273 [2024-10-05 17:56:03.667324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.273 [2024-10-05 17:56:03.733557] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.529 [2024-10-05 17:56:03.792700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.529 [2024-10-05 17:56:03.809071] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:42.529 INFO: Running with entropic power schedule (0xFF, 100). 00:06:42.529 INFO: Seed: 3685128717 00:06:42.529 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:42.529 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:42.529 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:42.529 INFO: A corpus is not provided, starting from an empty corpus 00:06:42.529 #2 INITED exec/s: 0 rss: 65Mb 00:06:42.529 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:42.529 This may also happen if the target rejected all inputs we tried so far 00:06:42.529 [2024-10-05 17:56:03.864229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:42.529 [2024-10-05 17:56:03.864260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.787 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:42.787 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.787 #6 NEW cov: 12217 ft: 12196 corp: 2/15b lim: 50 exec/s: 0 rss: 73Mb L: 14/14 MS: 4 ChangeByte-ChangeBit-InsertRepeatedBytes-CopyPart- 00:06:42.787 [2024-10-05 17:56:04.175107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:42.787 [2024-10-05 17:56:04.175140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.787 #7 NEW cov: 12330 ft: 12712 corp: 3/29b lim: 50 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 ShuffleBytes- 00:06:42.787 [2024-10-05 17:56:04.235322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:42.787 [2024-10-05 17:56:04.235352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.787 [2024-10-05 17:56:04.235389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:42.787 [2024-10-05 17:56:04.235405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.045 #8 NEW cov: 12336 ft: 13272 corp: 4/50b lim: 50 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 CrossOver- 00:06:43.045 [2024-10-05 17:56:04.295363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:43.045 [2024-10-05 17:56:04.295391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.045 #10 NEW cov: 12421 ft: 13569 corp: 5/67b lim: 50 exec/s: 0 rss: 73Mb L: 17/21 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:43.045 [2024-10-05 17:56:04.335695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:256 00:06:43.045 [2024-10-05 17:56:04.335724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.045 [2024-10-05 17:56:04.335763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446462603027808255 len:1 00:06:43.045 [2024-10-05 17:56:04.335779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.045 [2024-10-05 17:56:04.335836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:72057589759737855 len:65281 00:06:43.045 [2024-10-05 17:56:04.335853] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.045 #16 NEW cov: 12421 ft: 14068 corp: 6/101b lim: 50 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CrossOver- 00:06:43.045 [2024-10-05 17:56:04.395630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10561665232144601746 len:37523 00:06:43.045 [2024-10-05 17:56:04.395660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.045 #18 NEW cov: 12421 ft: 14110 corp: 7/117b lim: 50 exec/s: 0 rss: 73Mb L: 16/34 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:43.045 [2024-10-05 17:56:04.435736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:43.045 [2024-10-05 17:56:04.435763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.045 #19 NEW cov: 12421 ft: 14193 corp: 8/131b lim: 50 exec/s: 0 rss: 73Mb L: 14/34 MS: 1 CopyPart- 00:06:43.045 [2024-10-05 17:56:04.475878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073704767487 len:65536 00:06:43.045 [2024-10-05 17:56:04.475908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.045 #20 NEW cov: 12421 ft: 14277 corp: 9/145b lim: 50 exec/s: 0 rss: 73Mb L: 14/34 MS: 1 ChangeByte- 00:06:43.303 [2024-10-05 17:56:04.515988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:43.303 [2024-10-05 17:56:04.516017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.303 #21 NEW cov: 12421 ft: 14366 corp: 10/159b lim: 50 exec/s: 0 rss: 73Mb L: 14/34 MS: 1 ChangeByte- 00:06:43.303 [2024-10-05 17:56:04.556090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073704767487 len:65536 00:06:43.303 [2024-10-05 17:56:04.556120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.303 #22 NEW cov: 12421 ft: 14402 corp: 11/173b lim: 50 exec/s: 0 rss: 73Mb L: 14/34 MS: 1 ChangeASCIIInt- 00:06:43.303 [2024-10-05 17:56:04.616260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:37 len:1 00:06:43.304 [2024-10-05 17:56:04.616290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.304 #23 NEW cov: 12421 ft: 14418 corp: 12/191b lim: 50 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 InsertByte- 00:06:43.304 [2024-10-05 17:56:04.656665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:43.304 [2024-10-05 17:56:04.656695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.304 [2024-10-05 17:56:04.656739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:06:43.304 [2024-10-05 17:56:04.656754] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.304 [2024-10-05 17:56:04.656811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:06:43.304 [2024-10-05 17:56:04.656827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.304 [2024-10-05 17:56:04.656884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:44 00:06:43.304 [2024-10-05 17:56:04.656898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.304 #32 NEW cov: 12421 ft: 14684 corp: 13/231b lim: 50 exec/s: 0 rss: 73Mb L: 40/40 MS: 4 CopyPart-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:06:43.304 [2024-10-05 17:56:04.696527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:43.304 [2024-10-05 17:56:04.696560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.304 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:43.304 #33 NEW cov: 12444 ft: 14747 corp: 14/246b lim: 50 exec/s: 0 rss: 74Mb L: 15/40 MS: 1 CopyPart- 00:06:43.304 [2024-10-05 17:56:04.756922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:06:43.304 [2024-10-05 17:56:04.756950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.304 [2024-10-05 17:56:04.756995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:06:43.304 [2024-10-05 17:56:04.757011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.304 [2024-10-05 17:56:04.757068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:06:43.304 [2024-10-05 17:56:04.757086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.562 #34 NEW cov: 12444 ft: 14760 corp: 15/285b lim: 50 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 EraseBytes- 00:06:43.562 [2024-10-05 17:56:04.817066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:37 len:65536 00:06:43.562 [2024-10-05 17:56:04.817095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.562 [2024-10-05 17:56:04.817132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:43.562 [2024-10-05 17:56:04.817146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.562 [2024-10-05 17:56:04.817208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446462603027808255 len:1 00:06:43.562 [2024-10-05 17:56:04.817242] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.562 #35 NEW cov: 12444 ft: 14860 corp: 16/321b lim: 50 exec/s: 35 rss: 74Mb L: 36/40 MS: 1 CrossOver- 00:06:43.562 [2024-10-05 17:56:04.877114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073693298943 len:65536 00:06:43.562 [2024-10-05 17:56:04.877145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.562 [2024-10-05 17:56:04.877184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:43.562 [2024-10-05 17:56:04.877206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.562 #36 NEW cov: 12444 ft: 14869 corp: 17/344b lim: 50 exec/s: 36 rss: 74Mb L: 23/40 MS: 1 CMP- DE: "\010\000"- 00:06:43.562 [2024-10-05 17:56:04.937197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073692774655 len:65536 00:06:43.562 [2024-10-05 17:56:04.937226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.562 #37 NEW cov: 12444 ft: 14887 corp: 18/358b lim: 50 exec/s: 37 rss: 74Mb L: 14/40 MS: 1 CrossOver- 00:06:43.562 [2024-10-05 17:56:04.977232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1080863910552142079 len:65536 00:06:43.562 [2024-10-05 17:56:04.977261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.562 #38 NEW cov: 12444 ft: 14935 corp: 19/372b lim: 50 exec/s: 38 rss: 74Mb L: 14/40 MS: 1 ChangeBinInt- 00:06:43.820 [2024-10-05 17:56:05.037437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65024 00:06:43.820 [2024-10-05 17:56:05.037471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.820 #39 NEW cov: 12444 ft: 14952 corp: 20/386b lim: 50 exec/s: 39 rss: 74Mb L: 14/40 MS: 1 ChangeBinInt- 00:06:43.820 [2024-10-05 17:56:05.077563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:576742227280134143 len:65536 00:06:43.820 [2024-10-05 17:56:05.077592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.820 #40 NEW cov: 12444 ft: 14968 corp: 21/400b lim: 50 exec/s: 40 rss: 74Mb L: 14/40 MS: 1 PersAutoDict- DE: "\010\000"- 00:06:43.820 [2024-10-05 17:56:05.117651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1080863910552142079 len:65289 00:06:43.820 [2024-10-05 17:56:05.117681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.820 #41 NEW cov: 12444 ft: 14982 corp: 22/414b lim: 50 exec/s: 41 rss: 74Mb L: 14/40 MS: 1 PersAutoDict- DE: "\010\000"- 00:06:43.820 [2024-10-05 17:56:05.177961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073693298943 len:65536 
00:06:43.820 [2024-10-05 17:56:05.177990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.820 [2024-10-05 17:56:05.178043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:43.820 [2024-10-05 17:56:05.178059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.820 #42 NEW cov: 12444 ft: 14993 corp: 23/437b lim: 50 exec/s: 42 rss: 74Mb L: 23/40 MS: 1 ChangeASCIIInt- 00:06:43.820 [2024-10-05 17:56:05.238017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:43.820 [2024-10-05 17:56:05.238047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.820 #43 NEW cov: 12444 ft: 15001 corp: 24/448b lim: 50 exec/s: 43 rss: 74Mb L: 11/40 MS: 1 EraseBytes- 00:06:44.078 [2024-10-05 17:56:05.298308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:576742227280134143 len:65536 00:06:44.078 [2024-10-05 17:56:05.298338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.078 [2024-10-05 17:56:05.298391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65337 00:06:44.078 [2024-10-05 17:56:05.298407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.078 #44 NEW cov: 12444 ft: 15030 corp: 25/468b lim: 50 exec/s: 44 rss: 74Mb L: 20/40 MS: 1 CopyPart- 00:06:44.078 [2024-10-05 17:56:05.358372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10561665232144601746 len:37523 00:06:44.078 [2024-10-05 17:56:05.358402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.078 #45 NEW cov: 12444 ft: 15046 corp: 26/485b lim: 50 exec/s: 45 rss: 75Mb L: 17/40 MS: 1 InsertByte- 00:06:44.078 [2024-10-05 17:56:05.418515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65024 00:06:44.078 [2024-10-05 17:56:05.418544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.078 #46 NEW cov: 12444 ft: 15048 corp: 27/499b lim: 50 exec/s: 46 rss: 75Mb L: 14/40 MS: 1 ChangeASCIIInt- 00:06:44.078 [2024-10-05 17:56:05.478702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10561665232144601746 len:37621 00:06:44.078 [2024-10-05 17:56:05.478731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.078 #47 NEW cov: 12444 ft: 15061 corp: 28/515b lim: 50 exec/s: 47 rss: 75Mb L: 16/40 MS: 1 ChangeByte- 00:06:44.078 [2024-10-05 17:56:05.518786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1080863910552142079 len:65289 00:06:44.078 [2024-10-05 17:56:05.518814] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.336 #48 NEW cov: 12444 ft: 15079 corp: 29/532b lim: 50 exec/s: 48 rss: 75Mb L: 17/40 MS: 1 CrossOver- 00:06:44.336 [2024-10-05 17:56:05.579111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:648518346341351423 len:65536 00:06:44.336 [2024-10-05 17:56:05.579140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.336 [2024-10-05 17:56:05.579195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069431361535 len:65337 00:06:44.336 [2024-10-05 17:56:05.579212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.336 #49 NEW cov: 12444 ft: 15081 corp: 30/552b lim: 50 exec/s: 49 rss: 75Mb L: 20/40 MS: 1 ShuffleBytes- 00:06:44.336 [2024-10-05 17:56:05.639164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:44.336 [2024-10-05 17:56:05.639200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.336 #50 NEW cov: 12444 ft: 15096 corp: 31/566b lim: 50 exec/s: 50 rss: 75Mb L: 14/40 MS: 1 ShuffleBytes- 00:06:44.336 [2024-10-05 17:56:05.699424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:281470681743397 len:65536 00:06:44.336 [2024-10-05 17:56:05.699452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.336 [2024-10-05 17:56:05.699490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65281 00:06:44.336 [2024-10-05 17:56:05.699508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.336 #51 NEW cov: 12444 ft: 15130 corp: 32/595b lim: 50 exec/s: 51 rss: 75Mb L: 29/40 MS: 1 EraseBytes- 00:06:44.336 [2024-10-05 17:56:05.759515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:648237970876268543 len:65536 00:06:44.336 [2024-10-05 17:56:05.759544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.336 #52 NEW cov: 12444 ft: 15139 corp: 33/609b lim: 50 exec/s: 52 rss: 75Mb L: 14/40 MS: 1 ShuffleBytes- 00:06:44.594 [2024-10-05 17:56:05.799626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:576742227280134143 len:65337 00:06:44.594 [2024-10-05 17:56:05.799657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.594 #53 NEW cov: 12444 ft: 15158 corp: 34/619b lim: 50 exec/s: 53 rss: 75Mb L: 10/40 MS: 1 EraseBytes- 00:06:44.594 [2024-10-05 17:56:05.839689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18378908604305506438 len:65536 00:06:44.594 [2024-10-05 17:56:05.839718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.594 #54 NEW cov: 12444 ft: 15189 corp: 35/634b lim: 50 exec/s: 27 rss: 75Mb L: 15/40 MS: 1 InsertByte- 00:06:44.594 #54 DONE cov: 12444 ft: 15189 corp: 35/634b lim: 50 exec/s: 27 rss: 75Mb 00:06:44.594 ###### Recommended dictionary. ###### 00:06:44.594 "\010\000" # Uses: 2 00:06:44.594 ###### End of recommended dictionary. ###### 00:06:44.594 Done 54 runs in 2 second(s) 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:44.594 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:06:44.595 17:56:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.595 17:56:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.595 17:56:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.595 17:56:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:06:44.595 [2024-10-05 17:56:06.033097] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:44.595 [2024-10-05 17:56:06.033176] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486191 ] 00:06:44.853 [2024-10-05 17:56:06.208828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.853 [2024-10-05 17:56:06.273397] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.110 [2024-10-05 17:56:06.332070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.110 [2024-10-05 17:56:06.348437] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:45.110 INFO: Running with entropic power schedule (0xFF, 100). 00:06:45.110 INFO: Seed: 1929159234 00:06:45.110 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:45.110 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:45.110 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:45.110 INFO: A corpus is not provided, starting from an empty corpus 00:06:45.110 #2 INITED exec/s: 0 rss: 65Mb 00:06:45.110 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:45.110 This may also happen if the target rejected all inputs we tried so far 00:06:45.110 [2024-10-05 17:56:06.415031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.110 [2024-10-05 17:56:06.415075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.110 [2024-10-05 17:56:06.415222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.110 [2024-10-05 17:56:06.415245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.110 [2024-10-05 17:56:06.415366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:45.110 [2024-10-05 17:56:06.415386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.110 [2024-10-05 17:56:06.415516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:45.110 [2024-10-05 17:56:06.415537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.368 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:06:45.368 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:45.368 #3 NEW cov: 12259 ft: 12272 corp: 2/83b lim: 90 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:06:45.368 [2024-10-05 17:56:06.766156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.368 [2024-10-05 17:56:06.766227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.368 [2024-10-05 
17:56:06.766371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.368 [2024-10-05 17:56:06.766405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.368 [2024-10-05 17:56:06.766542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:45.368 [2024-10-05 17:56:06.766573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.368 [2024-10-05 17:56:06.766719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:45.368 [2024-10-05 17:56:06.766750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.368 #4 NEW cov: 12388 ft: 12916 corp: 3/166b lim: 90 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertByte- 00:06:45.626 [2024-10-05 17:56:06.845692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.626 [2024-10-05 17:56:06.845732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.845852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.626 [2024-10-05 17:56:06.845872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.626 #8 NEW cov: 12394 ft: 13553 corp: 4/211b lim: 90 exec/s: 0 rss: 73Mb L: 45/83 MS: 4 CrossOver-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:06:45.626 [2024-10-05 17:56:06.896284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.626 [2024-10-05 17:56:06.896319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.896406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.626 [2024-10-05 17:56:06.896431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.896552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:45.626 [2024-10-05 17:56:06.896579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.896702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:45.626 [2024-10-05 17:56:06.896728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.626 #9 NEW cov: 12479 ft: 13754 corp: 5/294b lim: 90 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertByte- 00:06:45.626 [2024-10-05 17:56:06.945894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.626 [2024-10-05 17:56:06.945929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:45.626 [2024-10-05 17:56:06.946051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.626 [2024-10-05 17:56:06.946074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.626 #11 NEW cov: 12479 ft: 13862 corp: 6/332b lim: 90 exec/s: 0 rss: 73Mb L: 38/83 MS: 2 InsertByte-CrossOver- 00:06:45.626 [2024-10-05 17:56:06.996552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.626 [2024-10-05 17:56:06.996582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.996651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.626 [2024-10-05 17:56:06.996672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.996800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:45.626 [2024-10-05 17:56:06.996825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:06.996952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:45.626 [2024-10-05 17:56:06.996973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.626 #12 NEW cov: 12479 ft: 13950 corp: 7/414b lim: 90 exec/s: 0 rss: 73Mb L: 82/83 MS: 1 ShuffleBytes- 00:06:45.626 [2024-10-05 17:56:07.046170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.626 [2024-10-05 17:56:07.046207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.626 [2024-10-05 17:56:07.046330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.626 [2024-10-05 17:56:07.046348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.626 #16 NEW cov: 12479 ft: 14064 corp: 8/456b lim: 90 exec/s: 0 rss: 73Mb L: 42/83 MS: 4 CrossOver-CopyPart-CrossOver-InsertRepeatedBytes- 00:06:45.885 [2024-10-05 17:56:07.096500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.885 [2024-10-05 17:56:07.096532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.096660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.885 [2024-10-05 17:56:07.096683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.885 #17 NEW cov: 12479 ft: 14132 corp: 9/494b lim: 90 exec/s: 0 rss: 73Mb L: 38/83 MS: 1 ChangeByte- 00:06:45.885 [2024-10-05 17:56:07.166613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.885 [2024-10-05 
17:56:07.166646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.166766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.885 [2024-10-05 17:56:07.166790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.885 #18 NEW cov: 12479 ft: 14156 corp: 10/532b lim: 90 exec/s: 0 rss: 73Mb L: 38/83 MS: 1 CopyPart- 00:06:45.885 [2024-10-05 17:56:07.237385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.885 [2024-10-05 17:56:07.237417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.237490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.885 [2024-10-05 17:56:07.237511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.237640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:45.885 [2024-10-05 17:56:07.237663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.237783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:45.885 [2024-10-05 17:56:07.237804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.885 #19 NEW cov: 12479 ft: 14254 corp: 11/618b lim: 90 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 CMP- DE: "\020\000\000\000"- 00:06:45.885 [2024-10-05 17:56:07.287526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:45.885 [2024-10-05 17:56:07.287558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.287626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:45.885 [2024-10-05 17:56:07.287651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.287774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:45.885 [2024-10-05 17:56:07.287794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.885 [2024-10-05 17:56:07.287922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:45.885 [2024-10-05 17:56:07.287939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.885 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:45.885 #20 NEW cov: 12502 ft: 14306 corp: 12/700b lim: 90 exec/s: 0 rss: 74Mb L: 82/86 MS: 1 ChangeBinInt- 00:06:46.144 [2024-10-05 
17:56:07.347778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.144 [2024-10-05 17:56:07.347810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.347908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.144 [2024-10-05 17:56:07.347930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.348052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.144 [2024-10-05 17:56:07.348075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.348193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.144 [2024-10-05 17:56:07.348215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.144 #21 NEW cov: 12502 ft: 14328 corp: 13/783b lim: 90 exec/s: 0 rss: 74Mb L: 83/86 MS: 1 PersAutoDict- DE: "\020\000\000\000"- 00:06:46.144 [2024-10-05 17:56:07.407341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.144 [2024-10-05 17:56:07.407370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.407502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.144 [2024-10-05 17:56:07.407522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.144 #22 NEW cov: 12502 ft: 14366 corp: 14/825b lim: 90 exec/s: 22 rss: 74Mb L: 42/86 MS: 1 PersAutoDict- DE: "\020\000\000\000"- 00:06:46.144 [2024-10-05 17:56:07.477593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.144 [2024-10-05 17:56:07.477625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.477748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.144 [2024-10-05 17:56:07.477771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.144 #23 NEW cov: 12502 ft: 14409 corp: 15/871b lim: 90 exec/s: 23 rss: 74Mb L: 46/86 MS: 1 PersAutoDict- DE: "\020\000\000\000"- 00:06:46.144 [2024-10-05 17:56:07.537675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.144 [2024-10-05 17:56:07.537708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.537822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.144 [2024-10-05 17:56:07.537843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.144 #24 NEW cov: 12502 ft: 14476 corp: 16/909b lim: 90 exec/s: 24 rss: 74Mb L: 38/86 MS: 1 ChangeByte- 00:06:46.144 [2024-10-05 17:56:07.588251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.144 [2024-10-05 17:56:07.588283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.588403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.144 [2024-10-05 17:56:07.588428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.588549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.144 [2024-10-05 17:56:07.588567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.144 [2024-10-05 17:56:07.588688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.144 [2024-10-05 17:56:07.588713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.403 #25 NEW cov: 12502 ft: 14486 corp: 17/992b lim: 90 exec/s: 25 rss: 74Mb L: 83/86 MS: 1 CopyPart- 00:06:46.403 [2024-10-05 17:56:07.638432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.403 [2024-10-05 17:56:07.638466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.638594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.403 [2024-10-05 17:56:07.638620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.638736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.403 [2024-10-05 17:56:07.638765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.638883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.403 [2024-10-05 17:56:07.638903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.403 #26 NEW cov: 12502 ft: 14510 corp: 18/1077b lim: 90 exec/s: 26 rss: 74Mb L: 85/86 MS: 1 CopyPart- 00:06:46.403 [2024-10-05 17:56:07.708602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.403 [2024-10-05 17:56:07.708634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.708733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.403 [2024-10-05 17:56:07.708755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.708871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.403 [2024-10-05 17:56:07.708894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.709016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.403 [2024-10-05 17:56:07.709034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.403 #27 NEW cov: 12502 ft: 14520 corp: 19/1160b lim: 90 exec/s: 27 rss: 74Mb L: 83/86 MS: 1 InsertByte- 00:06:46.403 [2024-10-05 17:56:07.758714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.403 [2024-10-05 17:56:07.758745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.758808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.403 [2024-10-05 17:56:07.758830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.758953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.403 [2024-10-05 17:56:07.758975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.759101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.403 [2024-10-05 17:56:07.759125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.403 #28 NEW cov: 12502 ft: 14545 corp: 20/1245b lim: 90 exec/s: 28 rss: 74Mb L: 85/86 MS: 1 CrossOver- 00:06:46.403 [2024-10-05 17:56:07.828539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.403 [2024-10-05 17:56:07.828565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.403 [2024-10-05 17:56:07.828691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.403 [2024-10-05 17:56:07.828714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.666 #29 NEW cov: 12502 ft: 14570 corp: 21/1290b lim: 90 exec/s: 29 rss: 74Mb L: 45/86 MS: 1 ChangeBinInt- 00:06:46.666 [2024-10-05 17:56:07.898766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.666 [2024-10-05 17:56:07.898793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:07.898942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.666 [2024-10-05 17:56:07.898969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.666 #30 NEW cov: 12502 ft: 14581 corp: 22/1340b lim: 90 exec/s: 30 rss: 74Mb L: 50/86 MS: 1 PersAutoDict- DE: "\020\000\000\000"- 00:06:46.666 [2024-10-05 17:56:07.969487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.666 [2024-10-05 17:56:07.969522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:07.969638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.666 [2024-10-05 17:56:07.969660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:07.969793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.666 [2024-10-05 17:56:07.969811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:07.969932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.666 [2024-10-05 17:56:07.969953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.666 #31 NEW cov: 12502 ft: 14624 corp: 23/1423b lim: 90 exec/s: 31 rss: 74Mb L: 83/86 MS: 1 ShuffleBytes- 00:06:46.666 [2024-10-05 17:56:08.019172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.666 [2024-10-05 17:56:08.019213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.019345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.666 [2024-10-05 17:56:08.019368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.666 #32 NEW cov: 12502 ft: 14634 corp: 24/1465b lim: 90 exec/s: 32 rss: 74Mb L: 42/86 MS: 1 ChangeByte- 00:06:46.666 [2024-10-05 17:56:08.069692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.666 [2024-10-05 17:56:08.069731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.069830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.666 [2024-10-05 17:56:08.069851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.069980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.666 [2024-10-05 17:56:08.070007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.070133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.666 [2024-10-05 17:56:08.070157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.666 #33 NEW cov: 12502 ft: 14731 corp: 25/1541b lim: 90 exec/s: 33 rss: 74Mb L: 76/86 MS: 1 EraseBytes- 00:06:46.666 [2024-10-05 17:56:08.119885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.666 [2024-10-05 17:56:08.119921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.120015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.666 [2024-10-05 17:56:08.120040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.120168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.666 [2024-10-05 17:56:08.120194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.666 [2024-10-05 17:56:08.120321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.666 [2024-10-05 17:56:08.120339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.988 #34 NEW cov: 12502 ft: 14738 corp: 26/1623b lim: 90 exec/s: 34 rss: 74Mb L: 82/86 MS: 1 ShuffleBytes- 00:06:46.988 [2024-10-05 17:56:08.170108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.988 [2024-10-05 17:56:08.170142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.170243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.988 [2024-10-05 17:56:08.170269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.170395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.988 [2024-10-05 17:56:08.170416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.170542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.988 [2024-10-05 17:56:08.170563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.988 #35 NEW cov: 12502 ft: 14760 corp: 27/1706b lim: 90 exec/s: 35 rss: 74Mb L: 83/86 MS: 1 CopyPart- 00:06:46.988 [2024-10-05 17:56:08.240333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.988 [2024-10-05 17:56:08.240364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.240429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.988 [2024-10-05 17:56:08.240452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.240575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.988 [2024-10-05 17:56:08.240600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.240734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:46.988 [2024-10-05 17:56:08.240757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.988 #36 NEW cov: 12502 ft: 14777 corp: 28/1789b lim: 90 exec/s: 36 rss: 74Mb L: 83/86 MS: 1 ShuffleBytes- 00:06:46.988 [2024-10-05 17:56:08.290227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.988 [2024-10-05 17:56:08.290262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.290374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.988 [2024-10-05 17:56:08.290399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.290519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:46.988 [2024-10-05 17:56:08.290538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.988 #37 NEW cov: 12502 ft: 15056 corp: 29/1846b lim: 90 exec/s: 37 rss: 75Mb L: 57/86 MS: 1 EraseBytes- 00:06:46.988 [2024-10-05 17:56:08.360152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:46.988 [2024-10-05 17:56:08.360182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.988 [2024-10-05 17:56:08.360317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:46.988 [2024-10-05 17:56:08.360339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.988 #38 NEW cov: 12502 ft: 15068 corp: 30/1896b lim: 90 exec/s: 19 rss: 75Mb L: 50/86 MS: 1 ChangeASCIIInt- 00:06:46.988 #38 DONE cov: 12502 ft: 15068 corp: 30/1896b lim: 90 exec/s: 19 rss: 75Mb 00:06:46.988 ###### Recommended dictionary. ###### 00:06:46.988 "\020\000\000\000" # Uses: 4 00:06:46.988 ###### End of recommended dictionary. 
###### 00:06:46.988 Done 38 runs in 2 second(s) 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:06:47.247 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:47.248 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:47.248 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:47.248 17:56:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:06:47.248 [2024-10-05 17:56:08.573759] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:47.248 [2024-10-05 17:56:08.573828] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486729 ] 00:06:47.505 [2024-10-05 17:56:08.754177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.505 [2024-10-05 17:56:08.819673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.505 [2024-10-05 17:56:08.878424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.505 [2024-10-05 17:56:08.894720] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:06:47.505 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.505 INFO: Seed: 179168574 00:06:47.505 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:47.505 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:47.505 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:47.505 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.505 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.505 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.505 This may also happen if the target rejected all inputs we tried so far 00:06:47.505 [2024-10-05 17:56:08.943614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:47.506 [2024-10-05 17:56:08.943645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.020 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:06:48.020 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:48.020 #7 NEW cov: 12250 ft: 12229 corp: 2/13b lim: 50 exec/s: 0 rss: 73Mb L: 12/12 MS: 5 ChangeBit-ChangeBit-InsertByte-InsertByte-InsertRepeatedBytes- 00:06:48.020 [2024-10-05 17:56:09.274549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.020 [2024-10-05 17:56:09.274583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.020 #8 NEW cov: 12363 ft: 12895 corp: 3/25b lim: 50 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 ShuffleBytes- 00:06:48.020 [2024-10-05 17:56:09.334628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.021 [2024-10-05 17:56:09.334658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.021 #9 NEW cov: 12369 ft: 13080 corp: 4/37b lim: 50 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 CopyPart- 00:06:48.021 [2024-10-05 17:56:09.394779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.021 [2024-10-05 17:56:09.394808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.021 #10 NEW cov: 12454 ft: 13511 corp: 5/56b lim: 50 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 
CrossOver- 00:06:48.021 [2024-10-05 17:56:09.435044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.021 [2024-10-05 17:56:09.435072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.021 [2024-10-05 17:56:09.435109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.021 [2024-10-05 17:56:09.435126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.021 #11 NEW cov: 12454 ft: 14343 corp: 6/77b lim: 50 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 CopyPart- 00:06:48.021 [2024-10-05 17:56:09.475019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.021 [2024-10-05 17:56:09.475051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.279 #12 NEW cov: 12454 ft: 14469 corp: 7/89b lim: 50 exec/s: 0 rss: 73Mb L: 12/21 MS: 1 ShuffleBytes- 00:06:48.279 [2024-10-05 17:56:09.515104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.279 [2024-10-05 17:56:09.515134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.279 #13 NEW cov: 12454 ft: 14527 corp: 8/102b lim: 50 exec/s: 0 rss: 73Mb L: 13/21 MS: 1 InsertByte- 00:06:48.279 [2024-10-05 17:56:09.555269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.279 [2024-10-05 17:56:09.555298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.279 #14 NEW cov: 12454 ft: 14553 corp: 9/115b lim: 50 exec/s: 0 rss: 73Mb L: 13/21 MS: 1 InsertByte- 00:06:48.279 [2024-10-05 17:56:09.615609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.279 [2024-10-05 17:56:09.615637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.279 [2024-10-05 17:56:09.615675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.279 [2024-10-05 17:56:09.615691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.279 #15 NEW cov: 12454 ft: 14586 corp: 10/137b lim: 50 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 CrossOver- 00:06:48.279 [2024-10-05 17:56:09.675586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.279 [2024-10-05 17:56:09.675615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.279 #16 NEW cov: 12454 ft: 14649 corp: 11/149b lim: 50 exec/s: 0 rss: 73Mb L: 12/22 MS: 1 ChangeBit- 00:06:48.279 [2024-10-05 17:56:09.735909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.279 [2024-10-05 17:56:09.735938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:06:48.279 [2024-10-05 17:56:09.735977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.279 [2024-10-05 17:56:09.735993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.537 #19 NEW cov: 12454 ft: 14664 corp: 12/170b lim: 50 exec/s: 0 rss: 73Mb L: 21/22 MS: 3 EraseBytes-ChangeBit-CrossOver- 00:06:48.537 [2024-10-05 17:56:09.776175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.537 [2024-10-05 17:56:09.776208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.776260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.537 [2024-10-05 17:56:09.776277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.776336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:48.537 [2024-10-05 17:56:09.776352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.537 #20 NEW cov: 12454 ft: 14982 corp: 13/204b lim: 50 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:06:48.537 [2024-10-05 17:56:09.836354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.537 [2024-10-05 17:56:09.836382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.836427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.537 [2024-10-05 17:56:09.836444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.836502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:48.537 [2024-10-05 17:56:09.836518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.537 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:48.537 #21 NEW cov: 12477 ft: 15012 corp: 14/237b lim: 50 exec/s: 0 rss: 74Mb L: 33/34 MS: 1 InsertRepeatedBytes- 00:06:48.537 [2024-10-05 17:56:09.896399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.537 [2024-10-05 17:56:09.896427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.896481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.537 [2024-10-05 17:56:09.896498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.537 #22 NEW cov: 12477 ft: 15018 corp: 15/258b lim: 50 exec/s: 22 rss: 74Mb L: 21/34 MS: 1 ChangeBinInt- 00:06:48.537 [2024-10-05 17:56:09.957047] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.537 [2024-10-05 17:56:09.957076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.957136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.537 [2024-10-05 17:56:09.957152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.957212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:48.537 [2024-10-05 17:56:09.957229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.957288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:48.537 [2024-10-05 17:56:09.957304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.537 [2024-10-05 17:56:09.957371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:48.537 [2024-10-05 17:56:09.957386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.795 #28 NEW cov: 12477 ft: 15433 corp: 16/308b lim: 50 exec/s: 28 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:06:48.795 [2024-10-05 17:56:10.016567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.795 [2024-10-05 17:56:10.016598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.795 #29 NEW cov: 12477 ft: 15476 corp: 17/321b lim: 50 exec/s: 29 rss: 74Mb L: 13/50 MS: 1 CopyPart- 00:06:48.795 [2024-10-05 17:56:10.057359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.795 [2024-10-05 17:56:10.057393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.057435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.795 [2024-10-05 17:56:10.057451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.057513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:48.795 [2024-10-05 17:56:10.057530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.057587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:48.795 [2024-10-05 17:56:10.057603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.057664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:48.795 [2024-10-05 
17:56:10.057682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:48.795 #30 NEW cov: 12477 ft: 15567 corp: 18/371b lim: 50 exec/s: 30 rss: 74Mb L: 50/50 MS: 1 ChangeBinInt- 00:06:48.795 [2024-10-05 17:56:10.116980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.795 [2024-10-05 17:56:10.117009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.117046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.795 [2024-10-05 17:56:10.117062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.795 #31 NEW cov: 12477 ft: 15578 corp: 19/397b lim: 50 exec/s: 31 rss: 74Mb L: 26/50 MS: 1 InsertRepeatedBytes- 00:06:48.795 [2024-10-05 17:56:10.177020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.795 [2024-10-05 17:56:10.177048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.795 #32 NEW cov: 12477 ft: 15595 corp: 20/416b lim: 50 exec/s: 32 rss: 74Mb L: 19/50 MS: 1 ChangeByte- 00:06:48.795 [2024-10-05 17:56:10.217297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.795 [2024-10-05 17:56:10.217326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.217377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.795 [2024-10-05 17:56:10.217393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.795 #33 NEW cov: 12477 ft: 15608 corp: 21/441b lim: 50 exec/s: 33 rss: 74Mb L: 25/50 MS: 1 InsertRepeatedBytes- 00:06:48.795 [2024-10-05 17:56:10.257384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:48.795 [2024-10-05 17:56:10.257420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.795 [2024-10-05 17:56:10.257487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:48.795 [2024-10-05 17:56:10.257508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.054 #34 NEW cov: 12477 ft: 15640 corp: 22/462b lim: 50 exec/s: 34 rss: 74Mb L: 21/50 MS: 1 ChangeBit- 00:06:49.054 [2024-10-05 17:56:10.317378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.054 [2024-10-05 17:56:10.317407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.054 #35 NEW cov: 12477 ft: 15664 corp: 23/481b lim: 50 exec/s: 35 rss: 74Mb L: 19/50 MS: 1 EraseBytes- 00:06:49.054 [2024-10-05 17:56:10.357852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:06:49.054 [2024-10-05 17:56:10.357884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.357930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.054 [2024-10-05 17:56:10.357944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.358001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.054 [2024-10-05 17:56:10.358018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.054 #36 NEW cov: 12477 ft: 15702 corp: 24/512b lim: 50 exec/s: 36 rss: 75Mb L: 31/50 MS: 1 EraseBytes- 00:06:49.054 [2024-10-05 17:56:10.417986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.054 [2024-10-05 17:56:10.418014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.418051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.054 [2024-10-05 17:56:10.418067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.418124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.054 [2024-10-05 17:56:10.418140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.054 #37 NEW cov: 12477 ft: 15722 corp: 25/547b lim: 50 exec/s: 37 rss: 75Mb L: 35/50 MS: 1 CMP- DE: "\377\377\377\365"- 00:06:49.054 [2024-10-05 17:56:10.478312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.054 [2024-10-05 17:56:10.478340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.478393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.054 [2024-10-05 17:56:10.478409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.478468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.054 [2024-10-05 17:56:10.478486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.054 [2024-10-05 17:56:10.478543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:49.054 [2024-10-05 17:56:10.478559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.054 #38 NEW cov: 12477 ft: 15833 corp: 26/588b lim: 50 exec/s: 38 rss: 75Mb L: 41/50 MS: 1 InsertRepeatedBytes- 00:06:49.313 [2024-10-05 17:56:10.518714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.518749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.518812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.313 [2024-10-05 17:56:10.518831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.518893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.313 [2024-10-05 17:56:10.518908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.518964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:49.313 [2024-10-05 17:56:10.518982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.519040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:49.313 [2024-10-05 17:56:10.519057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:49.313 #44 NEW cov: 12477 ft: 15879 corp: 27/638b lim: 50 exec/s: 44 rss: 75Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:06:49.313 [2024-10-05 17:56:10.558263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.558291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.558330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.313 [2024-10-05 17:56:10.558346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.313 #45 NEW cov: 12477 ft: 15884 corp: 28/659b lim: 50 exec/s: 45 rss: 75Mb L: 21/50 MS: 1 PersAutoDict- DE: "\377\377\377\365"- 00:06:49.313 [2024-10-05 17:56:10.598410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.598439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.598477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.313 [2024-10-05 17:56:10.598491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.313 #46 NEW cov: 12477 ft: 15937 corp: 29/680b lim: 50 exec/s: 46 rss: 75Mb L: 21/50 MS: 1 ShuffleBytes- 00:06:49.313 [2024-10-05 17:56:10.638294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.638323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 #47 NEW cov: 12477 ft: 15949 corp: 30/692b lim: 50 exec/s: 47 rss: 75Mb L: 12/50 
MS: 1 ChangeByte- 00:06:49.313 [2024-10-05 17:56:10.678428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.678458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 #48 NEW cov: 12477 ft: 15959 corp: 31/705b lim: 50 exec/s: 48 rss: 75Mb L: 13/50 MS: 1 CrossOver- 00:06:49.313 [2024-10-05 17:56:10.719026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.719054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.719095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.313 [2024-10-05 17:56:10.719112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.719172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.313 [2024-10-05 17:56:10.719193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.719254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:49.313 [2024-10-05 17:56:10.719270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.313 #49 NEW cov: 12477 ft: 15981 corp: 32/746b lim: 50 exec/s: 49 rss: 75Mb L: 41/50 MS: 1 CopyPart- 00:06:49.313 [2024-10-05 17:56:10.759068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.313 [2024-10-05 17:56:10.759095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.759133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.313 [2024-10-05 17:56:10.759148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.313 [2024-10-05 17:56:10.759205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.313 [2024-10-05 17:56:10.759222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.571 #50 NEW cov: 12477 ft: 15984 corp: 33/781b lim: 50 exec/s: 50 rss: 75Mb L: 35/50 MS: 1 ChangeByte- 00:06:49.571 [2024-10-05 17:56:10.818838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.571 [2024-10-05 17:56:10.818866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.571 #51 NEW cov: 12477 ft: 15996 corp: 34/793b lim: 50 exec/s: 51 rss: 75Mb L: 12/50 MS: 1 ShuffleBytes- 00:06:49.571 [2024-10-05 17:56:10.859488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.571 [2024-10-05 17:56:10.859517] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.571 [2024-10-05 17:56:10.859571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:49.571 [2024-10-05 17:56:10.859588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.571 [2024-10-05 17:56:10.859642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:49.571 [2024-10-05 17:56:10.859659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.571 [2024-10-05 17:56:10.859715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:49.571 [2024-10-05 17:56:10.859733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.571 #52 NEW cov: 12477 ft: 16006 corp: 35/834b lim: 50 exec/s: 52 rss: 75Mb L: 41/50 MS: 1 ShuffleBytes- 00:06:49.572 [2024-10-05 17:56:10.919146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:49.572 [2024-10-05 17:56:10.919173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.572 #53 NEW cov: 12477 ft: 16025 corp: 36/848b lim: 50 exec/s: 26 rss: 75Mb L: 14/50 MS: 1 EraseBytes- 00:06:49.572 #53 DONE cov: 12477 ft: 16025 corp: 36/848b lim: 50 exec/s: 26 rss: 75Mb 00:06:49.572 ###### Recommended dictionary. ###### 00:06:49.572 "\377\377\377\365" # Uses: 1 00:06:49.572 ###### End of recommended dictionary. 
###### 00:06:49.572 Done 53 runs in 2 second(s) 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.830 17:56:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:06:49.830 [2024-10-05 17:56:11.130566] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:49.830 [2024-10-05 17:56:11.130633] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487058 ] 00:06:50.087 [2024-10-05 17:56:11.311636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.087 [2024-10-05 17:56:11.378301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.087 [2024-10-05 17:56:11.437253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.087 [2024-10-05 17:56:11.453617] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:06:50.087 INFO: Running with entropic power schedule (0xFF, 100). 00:06:50.087 INFO: Seed: 2739195635 00:06:50.087 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:50.087 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:50.087 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:50.087 INFO: A corpus is not provided, starting from an empty corpus 00:06:50.087 #2 INITED exec/s: 0 rss: 65Mb 00:06:50.087 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:50.087 This may also happen if the target rejected all inputs we tried so far 00:06:50.087 [2024-10-05 17:56:11.519485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.087 [2024-10-05 17:56:11.519527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.603 NEW_FUNC[1/715]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:06:50.603 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:50.603 #9 NEW cov: 12257 ft: 12231 corp: 2/34b lim: 85 exec/s: 0 rss: 73Mb L: 33/33 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:50.603 [2024-10-05 17:56:11.850610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.603 [2024-10-05 17:56:11.850648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.603 NEW_FUNC[1/1]: 0x1f8f2a8 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:957 00:06:50.603 #12 NEW cov: 12389 ft: 12960 corp: 3/58b lim: 85 exec/s: 0 rss: 73Mb L: 24/33 MS: 3 ChangeByte-ChangeBinInt-CrossOver- 00:06:50.603 [2024-10-05 17:56:11.900559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.603 [2024-10-05 17:56:11.900593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.603 #13 NEW cov: 12395 ft: 13170 corp: 4/82b lim: 85 exec/s: 0 rss: 73Mb L: 24/33 MS: 1 ChangeByte- 00:06:50.603 [2024-10-05 17:56:11.971255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.603 [2024-10-05 17:56:11.971284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.603 [2024-10-05 17:56:11.971423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:50.603 [2024-10-05 17:56:11.971446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.603 #15 NEW cov: 12480 ft: 14205 corp: 5/117b lim: 85 exec/s: 0 rss: 73Mb L: 35/35 MS: 2 InsertByte-CrossOver- 00:06:50.603 [2024-10-05 17:56:12.021106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.603 [2024-10-05 17:56:12.021139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.603 #16 NEW cov: 12480 ft: 14401 corp: 6/141b lim: 85 exec/s: 0 rss: 73Mb L: 24/35 MS: 1 ChangeByte- 00:06:50.861 [2024-10-05 17:56:12.091368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.861 [2024-10-05 17:56:12.091404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.861 #17 NEW cov: 12480 ft: 14484 corp: 7/166b lim: 85 exec/s: 0 rss: 73Mb L: 25/35 MS: 1 InsertByte- 00:06:50.861 [2024-10-05 17:56:12.141458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.861 [2024-10-05 17:56:12.141492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.861 #18 NEW cov: 12480 ft: 14523 corp: 8/191b lim: 85 exec/s: 0 rss: 73Mb L: 25/35 MS: 1 InsertByte- 00:06:50.861 [2024-10-05 17:56:12.211826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.861 [2024-10-05 17:56:12.211858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.861 #19 NEW cov: 12480 ft: 14561 corp: 9/216b lim: 85 exec/s: 0 rss: 73Mb L: 25/35 MS: 1 InsertByte- 00:06:50.861 [2024-10-05 17:56:12.262152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.861 [2024-10-05 17:56:12.262177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.861 #20 NEW cov: 12480 ft: 14587 corp: 10/240b lim: 85 exec/s: 0 rss: 73Mb L: 24/35 MS: 1 ChangeBit- 00:06:50.861 [2024-10-05 17:56:12.312214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:50.861 [2024-10-05 17:56:12.312244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.118 #21 NEW cov: 12480 ft: 14619 corp: 11/264b lim: 85 exec/s: 0 rss: 73Mb L: 24/35 MS: 1 ChangeByte- 00:06:51.118 [2024-10-05 17:56:12.362251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.118 [2024-10-05 17:56:12.362279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.118 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:51.118 #27 NEW cov: 
12503 ft: 14626 corp: 12/288b lim: 85 exec/s: 0 rss: 74Mb L: 24/35 MS: 1 CrossOver- 00:06:51.118 [2024-10-05 17:56:12.412425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.118 [2024-10-05 17:56:12.412451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.118 #28 NEW cov: 12503 ft: 14634 corp: 13/321b lim: 85 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\003"- 00:06:51.118 [2024-10-05 17:56:12.482696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.118 [2024-10-05 17:56:12.482725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.118 #34 NEW cov: 12503 ft: 14672 corp: 14/346b lim: 85 exec/s: 34 rss: 74Mb L: 25/35 MS: 1 CopyPart- 00:06:51.118 [2024-10-05 17:56:12.552919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.118 [2024-10-05 17:56:12.552946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.375 #35 NEW cov: 12503 ft: 14695 corp: 15/370b lim: 85 exec/s: 35 rss: 74Mb L: 24/35 MS: 1 ChangeBinInt- 00:06:51.375 [2024-10-05 17:56:12.623264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.375 [2024-10-05 17:56:12.623292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.375 #36 NEW cov: 12503 ft: 14720 corp: 16/394b lim: 85 exec/s: 36 rss: 74Mb L: 24/35 MS: 1 ChangeByte- 00:06:51.375 [2024-10-05 17:56:12.673398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.376 [2024-10-05 17:56:12.673432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.376 #37 NEW cov: 12503 ft: 14738 corp: 17/416b lim: 85 exec/s: 37 rss: 74Mb L: 22/35 MS: 1 EraseBytes- 00:06:51.376 [2024-10-05 17:56:12.743568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.376 [2024-10-05 17:56:12.743594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.376 #38 NEW cov: 12503 ft: 14763 corp: 18/441b lim: 85 exec/s: 38 rss: 74Mb L: 25/35 MS: 1 InsertByte- 00:06:51.376 [2024-10-05 17:56:12.793779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.376 [2024-10-05 17:56:12.793811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.376 #39 NEW cov: 12503 ft: 14780 corp: 19/466b lim: 85 exec/s: 39 rss: 74Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:51.633 [2024-10-05 17:56:12.844015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.633 [2024-10-05 17:56:12.844040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.633 #40 NEW cov: 12503 ft: 14786 corp: 20/490b lim: 85 exec/s: 40 
rss: 74Mb L: 24/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\003"- 00:06:51.633 [2024-10-05 17:56:12.894499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.633 [2024-10-05 17:56:12.894531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.633 [2024-10-05 17:56:12.894672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:51.633 [2024-10-05 17:56:12.894698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.633 #42 NEW cov: 12503 ft: 14851 corp: 21/539b lim: 85 exec/s: 42 rss: 74Mb L: 49/49 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:51.633 [2024-10-05 17:56:12.944279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.633 [2024-10-05 17:56:12.944311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.633 #43 NEW cov: 12503 ft: 14865 corp: 22/564b lim: 85 exec/s: 43 rss: 74Mb L: 25/49 MS: 1 ShuffleBytes- 00:06:51.633 [2024-10-05 17:56:13.014537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.633 [2024-10-05 17:56:13.014567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.633 #44 NEW cov: 12503 ft: 14927 corp: 23/586b lim: 85 exec/s: 44 rss: 74Mb L: 22/49 MS: 1 ChangeBit- 00:06:51.633 [2024-10-05 17:56:13.084900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.633 [2024-10-05 17:56:13.084925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.891 #45 NEW cov: 12503 ft: 14939 corp: 24/610b lim: 85 exec/s: 45 rss: 74Mb L: 24/49 MS: 1 ChangeBit- 00:06:51.891 [2024-10-05 17:56:13.135042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.891 [2024-10-05 17:56:13.135076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.891 #46 NEW cov: 12503 ft: 14954 corp: 25/632b lim: 85 exec/s: 46 rss: 74Mb L: 22/49 MS: 1 EraseBytes- 00:06:51.891 [2024-10-05 17:56:13.185562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.891 [2024-10-05 17:56:13.185598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.891 [2024-10-05 17:56:13.185718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:51.891 [2024-10-05 17:56:13.185740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.891 #47 NEW cov: 12503 ft: 14983 corp: 26/673b lim: 85 exec/s: 47 rss: 74Mb L: 41/49 MS: 1 CopyPart- 00:06:51.891 [2024-10-05 17:56:13.255623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.891 [2024-10-05 17:56:13.255652] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.891 #48 NEW cov: 12503 ft: 14992 corp: 27/697b lim: 85 exec/s: 48 rss: 74Mb L: 24/49 MS: 1 ShuffleBytes- 00:06:51.891 [2024-10-05 17:56:13.326018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:51.891 [2024-10-05 17:56:13.326050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.891 [2024-10-05 17:56:13.326200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:51.891 [2024-10-05 17:56:13.326226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.149 #49 NEW cov: 12503 ft: 14998 corp: 28/731b lim: 85 exec/s: 49 rss: 75Mb L: 34/49 MS: 1 InsertByte- 00:06:52.149 [2024-10-05 17:56:13.395902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:52.149 [2024-10-05 17:56:13.395937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.149 #50 NEW cov: 12503 ft: 15046 corp: 29/748b lim: 85 exec/s: 50 rss: 75Mb L: 17/49 MS: 1 EraseBytes- 00:06:52.149 [2024-10-05 17:56:13.446074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:52.149 [2024-10-05 17:56:13.446099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.149 #51 NEW cov: 12503 ft: 15087 corp: 30/777b lim: 85 exec/s: 51 rss: 75Mb L: 29/49 MS: 1 InsertRepeatedBytes- 00:06:52.149 [2024-10-05 17:56:13.516367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:52.149 [2024-10-05 17:56:13.516400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.149 #57 NEW cov: 12503 ft: 15123 corp: 31/810b lim: 85 exec/s: 28 rss: 75Mb L: 33/49 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\003"- 00:06:52.149 #57 DONE cov: 12503 ft: 15123 corp: 31/810b lim: 85 exec/s: 28 rss: 75Mb 00:06:52.149 ###### Recommended dictionary. ###### 00:06:52.149 "\000\000\000\000\000\000\000\003" # Uses: 3 00:06:52.149 ###### End of recommended dictionary. 
###### 00:06:52.149 Done 57 runs in 2 second(s) 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.407 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.408 17:56:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:06:52.408 [2024-10-05 17:56:13.707977] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:52.408 [2024-10-05 17:56:13.708042] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487552 ] 00:06:52.665 [2024-10-05 17:56:13.884651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.665 [2024-10-05 17:56:13.950191] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.665 [2024-10-05 17:56:14.008795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.665 [2024-10-05 17:56:14.025096] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:06:52.666 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.666 INFO: Seed: 1015224638 00:06:52.666 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:06:52.666 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:06:52.666 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:52.666 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.666 #2 INITED exec/s: 0 rss: 66Mb 00:06:52.666 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.666 This may also happen if the target rejected all inputs we tried so far 00:06:52.666 [2024-10-05 17:56:14.070291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:52.666 [2024-10-05 17:56:14.070322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.923 NEW_FUNC[1/713]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:06:52.923 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.923 #13 NEW cov: 12201 ft: 12196 corp: 2/10b lim: 25 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "p\000\000\000\000\000\000\000"- 00:06:53.181 [2024-10-05 17:56:14.401067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.181 [2024-10-05 17:56:14.401101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.181 NEW_FUNC[1/2]: 0xf67bc8 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:321 00:06:53.181 NEW_FUNC[2/2]: 0x1f3e418 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:820 00:06:53.181 #14 NEW cov: 12322 ft: 12623 corp: 3/19b lim: 25 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:53.181 [2024-10-05 17:56:14.461172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.181 [2024-10-05 17:56:14.461207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.181 #15 NEW cov: 12328 ft: 12831 corp: 4/28b lim: 25 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:06:53.181 [2024-10-05 17:56:14.521408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.181 [2024-10-05 
17:56:14.521436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.181 [2024-10-05 17:56:14.521472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.181 [2024-10-05 17:56:14.521487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.181 #21 NEW cov: 12413 ft: 13559 corp: 5/40b lim: 25 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 CopyPart- 00:06:53.181 [2024-10-05 17:56:14.561585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.181 [2024-10-05 17:56:14.561614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.181 [2024-10-05 17:56:14.561651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.181 [2024-10-05 17:56:14.561667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.181 #22 NEW cov: 12413 ft: 13675 corp: 6/52b lim: 25 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 ChangeByte- 00:06:53.181 [2024-10-05 17:56:14.641811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.181 [2024-10-05 17:56:14.641843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.181 [2024-10-05 17:56:14.641909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.181 [2024-10-05 17:56:14.641930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.440 #23 NEW cov: 12413 ft: 13890 corp: 7/64b lim: 25 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 ChangeBit- 00:06:53.440 [2024-10-05 17:56:14.731891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.440 [2024-10-05 17:56:14.731925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.440 #27 NEW cov: 12413 ft: 13981 corp: 8/69b lim: 25 exec/s: 0 rss: 74Mb L: 5/12 MS: 4 InsertByte-EraseBytes-ChangeByte-CMP- DE: "\000\000\000\177"- 00:06:53.440 [2024-10-05 17:56:14.782281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.440 [2024-10-05 17:56:14.782360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.440 #28 NEW cov: 12413 ft: 14055 corp: 9/74b lim: 25 exec/s: 0 rss: 74Mb L: 5/12 MS: 1 ShuffleBytes- 00:06:53.440 [2024-10-05 17:56:14.862536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.440 [2024-10-05 17:56:14.862566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.440 [2024-10-05 17:56:14.862605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.440 [2024-10-05 17:56:14.862621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.440 [2024-10-05 17:56:14.862676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:53.440 [2024-10-05 17:56:14.862691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.440 #29 NEW cov: 12413 ft: 14401 corp: 10/90b lim: 25 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 PersAutoDict- DE: "\000\000\000\177"- 00:06:53.698 [2024-10-05 17:56:14.902513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.698 [2024-10-05 17:56:14.902542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.698 [2024-10-05 17:56:14.902592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.698 [2024-10-05 17:56:14.902608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.698 #30 NEW cov: 12413 ft: 14463 corp: 11/104b lim: 25 exec/s: 0 rss: 74Mb L: 14/16 MS: 1 CopyPart- 00:06:53.698 [2024-10-05 17:56:14.942510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.698 [2024-10-05 17:56:14.942539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.698 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:06:53.698 #31 NEW cov: 12436 ft: 14586 corp: 12/113b lim: 25 exec/s: 0 rss: 74Mb L: 9/16 MS: 1 ChangeBinInt- 00:06:53.698 [2024-10-05 17:56:14.982999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.698 [2024-10-05 17:56:14.983027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.698 [2024-10-05 17:56:14.983074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.698 [2024-10-05 17:56:14.983090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.698 [2024-10-05 17:56:14.983145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:53.698 [2024-10-05 17:56:14.983177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.698 [2024-10-05 17:56:14.983238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:53.698 [2024-10-05 17:56:14.983257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.698 #32 NEW cov: 12436 ft: 15022 corp: 13/134b lim: 25 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 CrossOver- 00:06:53.698 [2024-10-05 17:56:15.043009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.698 [2024-10-05 17:56:15.043038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:53.698 [2024-10-05 17:56:15.043080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.698 [2024-10-05 17:56:15.043096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.698 [2024-10-05 17:56:15.043153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:53.698 [2024-10-05 17:56:15.043168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.698 #33 NEW cov: 12436 ft: 15071 corp: 14/150b lim: 25 exec/s: 33 rss: 74Mb L: 16/21 MS: 1 InsertRepeatedBytes- 00:06:53.698 [2024-10-05 17:56:15.082881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.698 [2024-10-05 17:56:15.082908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.698 #34 NEW cov: 12436 ft: 15122 corp: 15/158b lim: 25 exec/s: 34 rss: 74Mb L: 8/21 MS: 1 EraseBytes- 00:06:53.698 [2024-10-05 17:56:15.123124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.698 [2024-10-05 17:56:15.123152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.698 [2024-10-05 17:56:15.123203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.698 [2024-10-05 17:56:15.123219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.956 #35 NEW cov: 12436 ft: 15135 corp: 16/171b lim: 25 exec/s: 35 rss: 74Mb L: 13/21 MS: 1 InsertRepeatedBytes- 00:06:53.956 [2024-10-05 17:56:15.183296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.957 [2024-10-05 17:56:15.183325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.183378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.957 [2024-10-05 17:56:15.183395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.957 #36 NEW cov: 12436 ft: 15240 corp: 17/181b lim: 25 exec/s: 36 rss: 74Mb L: 10/21 MS: 1 EraseBytes- 00:06:53.957 [2024-10-05 17:56:15.243362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.957 [2024-10-05 17:56:15.243389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.957 #37 NEW cov: 12436 ft: 15262 corp: 18/187b lim: 25 exec/s: 37 rss: 74Mb L: 6/21 MS: 1 EraseBytes- 00:06:53.957 [2024-10-05 17:56:15.303847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.957 [2024-10-05 17:56:15.303875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.303926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.957 [2024-10-05 17:56:15.303942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.303998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:53.957 [2024-10-05 17:56:15.304016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.304072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:53.957 [2024-10-05 17:56:15.304088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.957 #38 NEW cov: 12436 ft: 15280 corp: 19/208b lim: 25 exec/s: 38 rss: 74Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:06:53.957 [2024-10-05 17:56:15.343764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.957 [2024-10-05 17:56:15.343794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.343839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.957 [2024-10-05 17:56:15.343856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.957 #39 NEW cov: 12436 ft: 15291 corp: 20/220b lim: 25 exec/s: 39 rss: 74Mb L: 12/21 MS: 1 PersAutoDict- DE: "p\000\000\000\000\000\000\000"- 00:06:53.957 [2024-10-05 17:56:15.384062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:53.957 [2024-10-05 17:56:15.384091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.384142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:53.957 [2024-10-05 17:56:15.384158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.384216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:53.957 [2024-10-05 17:56:15.384231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.957 [2024-10-05 17:56:15.384289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:53.957 [2024-10-05 17:56:15.384304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.214 #40 NEW cov: 12436 ft: 15302 corp: 21/242b lim: 25 exec/s: 40 rss: 74Mb L: 22/22 MS: 1 InsertByte- 00:06:54.214 [2024-10-05 17:56:15.444253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.214 [2024-10-05 17:56:15.444281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.214 [2024-10-05 17:56:15.444331] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.214 [2024-10-05 17:56:15.444346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.214 [2024-10-05 17:56:15.444403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:54.214 [2024-10-05 17:56:15.444418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.214 [2024-10-05 17:56:15.444475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:54.214 [2024-10-05 17:56:15.444491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.214 #41 NEW cov: 12436 ft: 15309 corp: 22/263b lim: 25 exec/s: 41 rss: 74Mb L: 21/22 MS: 1 ChangeBinInt- 00:06:54.214 [2024-10-05 17:56:15.504052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.214 [2024-10-05 17:56:15.504081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.214 #43 NEW cov: 12436 ft: 15326 corp: 23/269b lim: 25 exec/s: 43 rss: 74Mb L: 6/22 MS: 2 InsertByte-PersAutoDict- DE: "\000\000\000\177"- 00:06:54.214 [2024-10-05 17:56:15.544205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.214 [2024-10-05 17:56:15.544234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.214 #44 NEW cov: 12436 ft: 15382 corp: 24/278b lim: 25 exec/s: 44 rss: 74Mb L: 9/22 MS: 1 ChangeBit- 00:06:54.214 [2024-10-05 17:56:15.584403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.214 [2024-10-05 17:56:15.584432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.214 [2024-10-05 17:56:15.584485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.214 [2024-10-05 17:56:15.584500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.214 #45 NEW cov: 12436 ft: 15393 corp: 25/289b lim: 25 exec/s: 45 rss: 74Mb L: 11/22 MS: 1 CrossOver- 00:06:54.214 [2024-10-05 17:56:15.644552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.214 [2024-10-05 17:56:15.644581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.214 [2024-10-05 17:56:15.644620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.214 [2024-10-05 17:56:15.644634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.214 #46 NEW cov: 12436 ft: 15400 corp: 26/301b lim: 25 exec/s: 46 rss: 74Mb L: 12/22 MS: 1 CopyPart- 00:06:54.471 [2024-10-05 17:56:15.684700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 
nsid:0 00:06:54.471 [2024-10-05 17:56:15.684729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.684783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.471 [2024-10-05 17:56:15.684799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.471 #47 NEW cov: 12436 ft: 15440 corp: 27/315b lim: 25 exec/s: 47 rss: 75Mb L: 14/22 MS: 1 PersAutoDict- DE: "p\000\000\000\000\000\000\000"- 00:06:54.471 [2024-10-05 17:56:15.744845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.471 [2024-10-05 17:56:15.744874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.744912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.471 [2024-10-05 17:56:15.744926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.471 #48 NEW cov: 12436 ft: 15480 corp: 28/327b lim: 25 exec/s: 48 rss: 75Mb L: 12/22 MS: 1 CrossOver- 00:06:54.471 [2024-10-05 17:56:15.785205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.471 [2024-10-05 17:56:15.785234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.785291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.471 [2024-10-05 17:56:15.785305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.785359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:54.471 [2024-10-05 17:56:15.785379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.785435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:54.471 [2024-10-05 17:56:15.785449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.471 #49 NEW cov: 12436 ft: 15495 corp: 29/348b lim: 25 exec/s: 49 rss: 75Mb L: 21/22 MS: 1 ChangeByte- 00:06:54.471 [2024-10-05 17:56:15.825200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.471 [2024-10-05 17:56:15.825229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.825278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.471 [2024-10-05 17:56:15.825294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.825351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:2 nsid:0 00:06:54.471 [2024-10-05 17:56:15.825368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.471 #50 NEW cov: 12436 ft: 15526 corp: 30/364b lim: 25 exec/s: 50 rss: 75Mb L: 16/22 MS: 1 CrossOver- 00:06:54.471 [2024-10-05 17:56:15.865487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.471 [2024-10-05 17:56:15.865516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.865566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.471 [2024-10-05 17:56:15.865581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.865635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:54.471 [2024-10-05 17:56:15.865650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.865706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:54.471 [2024-10-05 17:56:15.865722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.471 #51 NEW cov: 12436 ft: 15531 corp: 31/388b lim: 25 exec/s: 51 rss: 75Mb L: 24/24 MS: 1 CopyPart- 00:06:54.471 [2024-10-05 17:56:15.905424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.471 [2024-10-05 17:56:15.905453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.905490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.471 [2024-10-05 17:56:15.905507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.471 [2024-10-05 17:56:15.905563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:54.471 [2024-10-05 17:56:15.905578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.730 #52 NEW cov: 12436 ft: 15542 corp: 32/407b lim: 25 exec/s: 52 rss: 75Mb L: 19/24 MS: 1 EraseBytes- 00:06:54.730 [2024-10-05 17:56:15.965667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.730 [2024-10-05 17:56:15.965697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.730 [2024-10-05 17:56:15.965738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.730 [2024-10-05 17:56:15.965754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.730 [2024-10-05 17:56:15.965809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:2 nsid:0 00:06:54.730 [2024-10-05 17:56:15.965824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.730 #53 NEW cov: 12436 ft: 15552 corp: 33/423b lim: 25 exec/s: 53 rss: 75Mb L: 16/24 MS: 1 ChangeBinInt- 00:06:54.730 [2024-10-05 17:56:16.005663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.730 [2024-10-05 17:56:16.005693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.730 [2024-10-05 17:56:16.005742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.730 [2024-10-05 17:56:16.005757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.730 #54 NEW cov: 12436 ft: 15573 corp: 34/433b lim: 25 exec/s: 54 rss: 75Mb L: 10/24 MS: 1 CopyPart- 00:06:54.730 [2024-10-05 17:56:16.066070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:54.730 [2024-10-05 17:56:16.066099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.730 [2024-10-05 17:56:16.066151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:54.730 [2024-10-05 17:56:16.066167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.730 [2024-10-05 17:56:16.066237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:54.730 [2024-10-05 17:56:16.066253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.730 [2024-10-05 17:56:16.066309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:06:54.730 [2024-10-05 17:56:16.066326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.730 #57 NEW cov: 12436 ft: 15583 corp: 35/454b lim: 25 exec/s: 28 rss: 75Mb L: 21/24 MS: 3 CrossOver-ChangeBit-InsertRepeatedBytes- 00:06:54.730 #57 DONE cov: 12436 ft: 15583 corp: 35/454b lim: 25 exec/s: 28 rss: 75Mb 00:06:54.730 ###### Recommended dictionary. ###### 00:06:54.730 "p\000\000\000\000\000\000\000" # Uses: 2 00:06:54.730 "\000\000\000\177" # Uses: 2 00:06:54.730 ###### End of recommended dictionary. 
######
00:06:54.730 Done 57 runs in 2 second(s)
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:54.989 17:56:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 [2024-10-05 17:56:16.278736] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:06:54.989 [2024-10-05 17:56:16.278808] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488083 ]
00:06:55.247 [2024-10-05 17:56:16.461623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.247 [2024-10-05 17:56:16.527394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.247 [2024-10-05 17:56:16.586008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:55.247 [2024-10-05 17:56:16.602409] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
00:06:55.247 INFO: Running with entropic power schedule (0xFF, 100).
00:06:55.247 INFO: Seed: 3593222914
00:06:55.247 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d),
00:06:55.247 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40),
00:06:55.247 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:06:55.247 INFO: A corpus is not provided, starting from an empty corpus
00:06:55.247 #2 INITED exec/s: 0 rss: 65Mb
00:06:55.247 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:55.247 This may also happen if the target rejected all inputs we tried so far
00:06:55.247 [2024-10-05 17:56:16.657732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.247 [2024-10-05 17:56:16.657763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:55.247 [2024-10-05 17:56:16.657804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.247 [2024-10-05 17:56:16.657823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:55.504 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:06:55.504 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:55.504 #22 NEW cov: 12281 ft: 12259 corp: 2/58b lim: 100 exec/s: 0 rss: 73Mb L: 57/57 MS: 5 ChangeBit-InsertByte-CopyPart-EraseBytes-InsertRepeatedBytes-
00:06:55.762 [2024-10-05 17:56:16.989652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:16.989713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:16.989856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:16.989889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:16.990028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL
DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:16.990060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:55.763 #24 NEW cov: 12394 ft: 13276 corp: 3/118b lim: 100 exec/s: 0 rss: 73Mb L: 60/60 MS: 2 InsertByte-InsertRepeatedBytes-
00:06:55.763 [2024-10-05 17:56:17.039555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.039590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:17.039706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.039733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:17.039857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.039878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:55.763 #25 NEW cov: 12400 ft: 13610 corp: 4/178b lim: 100 exec/s: 0 rss: 73Mb L: 60/60 MS: 1 ChangeBinInt-
00:06:55.763 [2024-10-05 17:56:17.099510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.099539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:17.099663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.099685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:55.763 #26 NEW cov: 12485 ft: 13936 corp: 5/235b lim: 100 exec/s: 0 rss: 73Mb L: 57/60 MS: 1 ChangeByte-
00:06:55.763 [2024-10-05 17:56:17.159882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.159914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:17.160050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.160069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:17.160194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.160218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:55.763 #27 NEW cov: 12485 ft: 14030 corp: 6/295b lim: 100 exec/s: 0 rss: 73Mb L: 60/60 MS: 1
ChangeBit-
00:06:55.763 [2024-10-05 17:56:17.219853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.219884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:55.763 [2024-10-05 17:56:17.220012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:55.763 [2024-10-05 17:56:17.220037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.021 #28 NEW cov: 12485 ft: 14116 corp: 7/352b lim: 100 exec/s: 0 rss: 73Mb L: 57/60 MS: 1 ShuffleBytes-
00:06:56.021 [2024-10-05 17:56:17.270218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.270254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.270371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.270394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.270518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.270539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.021 #29 NEW cov: 12485 ft: 14191 corp: 8/412b lim: 100 exec/s: 0 rss: 73Mb L: 60/60 MS: 1 ChangeBit-
00:06:56.021 [2024-10-05 17:56:17.320410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.320444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.320538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.320558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.320688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.320709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.021 #30 NEW cov: 12485 ft: 14217 corp: 9/474b lim: 100 exec/s: 0 rss: 73Mb L: 62/62 MS: 1 CrossOver-
00:06:56.021 [2024-10-05 17:56:17.380545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.380580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.380700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.380724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.380852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.380876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.021 #31 NEW cov: 12485 ft: 14254 corp: 10/536b lim: 100 exec/s: 0 rss: 73Mb L: 62/62 MS: 1 ShuffleBytes-
00:06:56.021 [2024-10-05 17:56:17.450462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.450498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.021 [2024-10-05 17:56:17.450629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.021 [2024-10-05 17:56:17.450654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.021 #32 NEW cov: 12485 ft: 14286 corp: 11/593b lim: 100 exec/s: 0 rss: 73Mb L: 57/62 MS: 1 ChangeBit-
00:06:56.279 [2024-10-05 17:56:17.500622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.500650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.279 [2024-10-05 17:56:17.500779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.500798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.279 #33 NEW cov: 12485 ft: 14311 corp: 12/650b lim: 100 exec/s: 0 rss: 73Mb L: 57/62 MS: 1 ChangeByte-
00:06:56.279 [2024-10-05 17:56:17.551298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.551332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.279 [2024-10-05 17:56:17.551394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5280832615950076233 len:18762 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.551418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.279 [2024-10-05 17:56:17.551544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1229520896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.551565]
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.279 [2024-10-05 17:56:17.551694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.551716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:56.279 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:06:56.279 #34 NEW cov: 12508 ft: 14684 corp: 13/730b lim: 100 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 InsertRepeatedBytes-
00:06:56.279 [2024-10-05 17:56:17.601211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.601244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.279 [2024-10-05 17:56:17.601367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.601394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.279 [2024-10-05 17:56:17.601523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.601545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.279 #35 NEW cov: 12508 ft: 14756 corp: 14/790b lim: 100 exec/s: 35 rss: 74Mb L: 60/80 MS: 1 ChangeBit-
00:06:56.279 [2024-10-05 17:56:17.671469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.279 [2024-10-05 17:56:17.671504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.280 [2024-10-05 17:56:17.671625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.280 [2024-10-05 17:56:17.671649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.280 [2024-10-05 17:56:17.671780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.280 [2024-10-05 17:56:17.671804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.280 #36 NEW cov: 12508 ft: 14780 corp: 15/862b lim: 100 exec/s: 36 rss: 74Mb L: 72/80 MS: 1 CopyPart-
00:06:56.280 [2024-10-05 17:56:17.721233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.280 [2024-10-05 17:56:17.721269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.280 [2024-10-05 17:56:17.721392] nvme_qpair.c:
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.280 [2024-10-05 17:56:17.721414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.538 #37 NEW cov: 12508 ft: 14805 corp: 16/919b lim: 100 exec/s: 37 rss: 74Mb L: 57/80 MS: 1 ChangeASCIIInt-
00:06:56.538 [2024-10-05 17:56:17.781705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.781737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.781820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.781841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.781969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.781992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.538 #38 NEW cov: 12508 ft: 14859 corp: 17/979b lim: 100 exec/s: 38 rss: 74Mb L: 60/80 MS: 1 ChangeByte-
00:06:56.538 [2024-10-05 17:56:17.821758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.821792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.821920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.821941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.822060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.822092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.538 #39 NEW cov: 12508 ft: 14876 corp: 18/1052b lim: 100 exec/s: 39 rss: 74Mb L: 73/80 MS: 1 InsertRepeatedBytes-
00:06:56.538 [2024-10-05 17:56:17.861686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.861718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.861842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.861866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.538 #40 NEW cov: 12508 ft: 14911 corp: 19/1109b lim: 100 exec/s: 40 rss: 74Mb L: 57/80 MS: 1 ChangeBit-
00:06:56.538 [2024-10-05 17:56:17.902259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.902288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.902356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3746994890848089140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.902379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.902500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.902524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.902655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.902678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:56.538 #41 NEW cov: 12508 ft: 14922 corp: 20/1199b lim: 100 exec/s: 41 rss: 74Mb L: 90/90 MS: 1 InsertRepeatedBytes-
00:06:56.538 [2024-10-05 17:56:17.962447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:16449 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.962479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.962569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4629771061636907072 len:16449 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.962595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.962724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.962742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.538 [2024-10-05 17:56:17.962864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.538 [2024-10-05 17:56:17.962886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:56.538 #42 NEW cov: 12508 ft: 14932 corp: 21/1287b lim: 100 exec/s: 42 rss: 74Mb L: 88/90 MS: 1 InsertRepeatedBytes-
00:06:56.796 [2024-10-05 17:56:18.002350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05
17:56:18.002383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.002465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.002490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.002619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.002640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.796 #43 NEW cov: 12508 ft: 14945 corp: 22/1349b lim: 100 exec/s: 43 rss: 74Mb L: 62/90 MS: 1 ChangeByte-
00:06:56.796 [2024-10-05 17:56:18.062443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4467570830519304192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.062475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.062567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.062593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.062721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.062744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.796 #44 NEW cov: 12508 ft: 14956 corp: 23/1411b lim: 100 exec/s: 44 rss: 74Mb L: 62/90 MS: 1 ChangeBinInt-
00:06:56.796 [2024-10-05 17:56:18.102550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.102583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.102706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.102723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.102850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.102871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.796 #45 NEW cov: 12508 ft: 14996 corp: 24/1478b lim: 100 exec/s: 45 rss: 74Mb L: 67/90 MS: 1 EraseBytes-
00:06:56.796 [2024-10-05 17:56:18.162742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05
17:56:18.162774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.162890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.162910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.163031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.163056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.796 #46 NEW cov: 12508 ft: 15000 corp: 25/1550b lim: 100 exec/s: 46 rss: 74Mb L: 72/90 MS: 1 ChangeBinInt-
00:06:56.796 [2024-10-05 17:56:18.202782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.202819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.002948]? 
00:06:56.796 [2024-10-05 17:56:18.202948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.202970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:56.796 [2024-10-05 17:56:18.203082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:56.796 [2024-10-05 17:56:18.203102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:56.796 #47 NEW cov: 12508 ft: 15023 corp: 26/1610b lim: 100 exec/s: 47 rss: 74Mb L: 60/90 MS: 1 ShuffleBytes-
00:06:57.059 [2024-10-05 17:56:18.262945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:171193396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.262981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.263103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.263127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.263249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.263269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:57.059 #48 NEW cov: 12508 ft: 15037 corp: 27/1673b lim: 100 exec/s: 48 rss: 74Mb L: 63/90 MS: 1 CrossOver-
00:06:57.059 [2024-10-05 17:56:18.302842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05
17:56:18.302872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.303014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.303038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.059 #54 NEW cov: 12508 ft: 15090 corp: 28/1723b lim: 100 exec/s: 54 rss: 74Mb L: 50/90 MS: 1 EraseBytes-
00:06:57.059 [2024-10-05 17:56:18.342910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.342937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.343055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.343075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.059 #55 NEW cov: 12508 ft: 15103 corp: 29/1773b lim: 100 exec/s: 55 rss: 74Mb L: 50/90 MS: 1 ChangeByte-
00:06:57.059 [2024-10-05 17:56:18.403044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.403075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.403204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.403238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.059 #56 NEW cov: 12508 ft: 15106 corp: 30/1830b lim: 100 exec/s: 56 rss: 74Mb L: 57/90 MS: 1 ShuffleBytes-
00:06:57.059 [2024-10-05 17:56:18.463465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.463499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.463607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.463631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.463757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.463783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:57.059 #57 NEW cov: 12508 ft: 15122 corp: 31/1890b lim: 100 exec/s: 57 rss: 74Mb L: 60/90 MS: 1 ChangeBit-
00:06:57.059 [2024-10-05 17:56:18.503418] nvme_qpair.c:
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.303449]? 
00:06:57.059 [2024-10-05 17:56:18.503449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.059 [2024-10-05 17:56:18.503593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.059 [2024-10-05 17:56:18.503616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.322 #58 NEW cov: 12508 ft: 15162 corp: 32/1940b lim: 100 exec/s: 58 rss: 74Mb L: 50/90 MS: 1 CopyPart-
00:06:57.322 [2024-10-05 17:56:18.543719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.322 [2024-10-05 17:56:18.543750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.322 [2024-10-05 17:56:18.543841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.322 [2024-10-05 17:56:18.543866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.322 [2024-10-05 17:56:18.543988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.322 [2024-10-05 17:56:18.544012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:57.322 #59 NEW cov: 12508 ft: 15186 corp: 33/2013b lim: 100 exec/s: 59 rss: 75Mb L: 73/90 MS: 1 ChangeASCIIInt-
00:06:57.322 [2024-10-05 17:56:18.613930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.322 [2024-10-05 17:56:18.613965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:57.322 [2024-10-05 17:56:18.614042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.322 [2024-10-05 17:56:18.614061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:57.322 [2024-10-05 17:56:18.614178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:57.322 [2024-10-05 17:56:18.614208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:57.322 #60 NEW cov: 12508 ft: 15222 corp: 34/2085b lim: 100 exec/s: 30 rss: 75Mb L: 72/90 MS: 1 ChangeBinInt-
00:06:57.322 #60 DONE cov: 12508 ft: 15222 corp: 34/2085b lim: 100 exec/s: 30 rss: 75Mb
00:06:57.322 Done 60 runs in 2 second(s)
00:06:57.322 17:56:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz
00:06:57.322 17:56:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:57.322
17:56:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:57.322 17:56:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT
00:06:57.322
00:06:57.322 real 1m4.461s
00:06:57.322 user 1m40.846s
00:06:57.322 sys 0m7.313s
00:06:57.322 17:56:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:57.322 17:56:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:06:57.322 ************************************
00:06:57.322 END TEST nvmf_llvm_fuzz
00:06:57.322 ************************************
00:06:57.580 17:56:18 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}"
00:06:57.580 17:56:18 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in
00:06:57.580 17:56:18 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:06:57.580 17:56:18 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:57.580 17:56:18 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:57.580 17:56:18 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:06:57.580 ************************************
00:06:57.580 START TEST vfio_llvm_fuzz
00:06:57.580 ************************************
00:06:57.580 17:56:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:06:57.580 * Looking for test storage...
00:06:57.580 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio
00:06:57.580 17:56:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:57.580 17:56:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version
00:06:57.580 17:56:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.841 --rc genhtml_branch_coverage=1
00:06:57.841 --rc genhtml_function_coverage=1
00:06:57.841 --rc genhtml_legend=1
00:06:57.841 --rc geninfo_all_blocks=1
00:06:57.841 --rc geninfo_unexecuted_blocks=1
00:06:57.841 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:57.841 '
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.841 --rc genhtml_branch_coverage=1
00:06:57.841 --rc genhtml_function_coverage=1
00:06:57.841 --rc genhtml_legend=1
00:06:57.841 --rc geninfo_all_blocks=1
00:06:57.841 --rc geninfo_unexecuted_blocks=1
00:06:57.841 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:57.841 '
00:06:57.841 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.841 --rc genhtml_branch_coverage=1
00:06:57.841 --rc genhtml_function_coverage=1
00:06:57.841 --rc genhtml_legend=1
00:06:57.841 --rc geninfo_all_blocks=1
00:06:57.841 --rc geninfo_unexecuted_blocks=1
00:06:57.842 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:57.842 '
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:57.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.842 --rc genhtml_branch_coverage=1
00:06:57.842 --rc genhtml_function_coverage=1
00:06:57.842 --rc genhtml_legend=1
00:06:57.842 --rc geninfo_all_blocks=1
00:06:57.842 --rc geninfo_unexecuted_blocks=1
00:06:57.842 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:57.842 '
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']'
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]]
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 --
# CONFIG_OCF_PATH=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 --
CONFIG_HAVE_EVP_MAC=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX=
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh
00:06:57.842 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- #
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:57.843 #define SPDK_CONFIG_H 00:06:57.843 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:57.843 #define SPDK_CONFIG_APPS 1 00:06:57.843 #define SPDK_CONFIG_ARCH native 00:06:57.843 #undef SPDK_CONFIG_ASAN 00:06:57.843 #undef SPDK_CONFIG_AVAHI 00:06:57.843 #undef SPDK_CONFIG_CET 00:06:57.843 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:57.843 #define SPDK_CONFIG_COVERAGE 1 00:06:57.843 #define SPDK_CONFIG_CROSS_PREFIX 00:06:57.843 #undef SPDK_CONFIG_CRYPTO 00:06:57.843 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:57.843 #undef SPDK_CONFIG_CUSTOMOCF 00:06:57.843 #undef SPDK_CONFIG_DAOS 00:06:57.843 #define SPDK_CONFIG_DAOS_DIR 00:06:57.843 #define SPDK_CONFIG_DEBUG 1 00:06:57.843 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:57.843 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:57.843 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:57.843 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:57.843 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:57.843 #undef SPDK_CONFIG_DPDK_UADK 00:06:57.843 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:57.843 #define SPDK_CONFIG_EXAMPLES 1 00:06:57.843 #undef SPDK_CONFIG_FC 00:06:57.843 #define SPDK_CONFIG_FC_PATH 00:06:57.843 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:57.843 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:57.843 #define SPDK_CONFIG_FSDEV 1 00:06:57.843 #undef SPDK_CONFIG_FUSE 00:06:57.843 #define SPDK_CONFIG_FUZZER 1 00:06:57.843 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:57.843 #undef SPDK_CONFIG_GOLANG 00:06:57.843 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:57.843 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:57.843 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:57.843 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:57.843 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:57.843 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:57.843 #undef SPDK_CONFIG_HAVE_LZ4 00:06:57.843 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:57.843 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:57.843 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:57.843 #define SPDK_CONFIG_IDXD 1 00:06:57.843 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:57.843 #undef SPDK_CONFIG_IPSEC_MB 00:06:57.843 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:57.843 #define SPDK_CONFIG_ISAL 1 00:06:57.843 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:57.843 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:57.843 #define SPDK_CONFIG_LIBDIR 00:06:57.843 #undef SPDK_CONFIG_LTO 00:06:57.843 #define SPDK_CONFIG_MAX_LCORES 128 00:06:57.843 #define SPDK_CONFIG_NVME_CUSE 1 00:06:57.843 #undef SPDK_CONFIG_OCF 00:06:57.843 #define SPDK_CONFIG_OCF_PATH 00:06:57.843 #define SPDK_CONFIG_OPENSSL_PATH 00:06:57.843 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:57.843 #define SPDK_CONFIG_PGO_DIR 00:06:57.843 #undef SPDK_CONFIG_PGO_USE 00:06:57.843 #define SPDK_CONFIG_PREFIX /usr/local 00:06:57.843 #undef SPDK_CONFIG_RAID5F 00:06:57.843 #undef SPDK_CONFIG_RBD 00:06:57.843 #define SPDK_CONFIG_RDMA 1 00:06:57.843 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:57.843 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:57.843 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:57.843 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:57.843 #undef SPDK_CONFIG_SHARED 00:06:57.843 #undef SPDK_CONFIG_SMA 00:06:57.843 #define SPDK_CONFIG_TESTS 1 00:06:57.843 #undef SPDK_CONFIG_TSAN 00:06:57.843 #define SPDK_CONFIG_UBLK 1 00:06:57.843 #define SPDK_CONFIG_UBSAN 1 00:06:57.843 #undef SPDK_CONFIG_UNIT_TESTS 00:06:57.843 #undef SPDK_CONFIG_URING 00:06:57.843 #define SPDK_CONFIG_URING_PATH 00:06:57.843 #undef SPDK_CONFIG_URING_ZNS 00:06:57.843 #undef SPDK_CONFIG_USDT 00:06:57.843 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:57.843 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:57.843 #define SPDK_CONFIG_VFIO_USER 1 00:06:57.843 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:57.843 #define SPDK_CONFIG_VHOST 1 00:06:57.843 #define SPDK_CONFIG_VIRTIO 1 00:06:57.843 #undef SPDK_CONFIG_VTUNE 00:06:57.843 #define SPDK_CONFIG_VTUNE_DIR 00:06:57.843 #define SPDK_CONFIG_WERROR 1 00:06:57.843 #define SPDK_CONFIG_WPDK_DIR 00:06:57.843 #undef SPDK_CONFIG_XNVME 00:06:57.843 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:57.843 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:57.844 17:56:19 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
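The paired ": 0" / "export SPDK_TEST_*" entries traced above are bash's default-assignment idiom: autotest_common.sh gives every test flag a default, then exports it, and flags the job already enabled (SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT, SPDK_RUN_UBSAN) skip the default and appear as ": 1" instead. A minimal sketch of the pattern, using a made-up flag name rather than one of the real variables:

    # ": ${VAR:=default}" assigns the default only when VAR is unset or
    # empty; under xtrace the expanded command prints as ": 0" or ": 1",
    # which is exactly the form logged above.
    : "${SPDK_TEST_EXAMPLE:=0}"   # SPDK_TEST_EXAMPLE is illustrative only
    export SPDK_TEST_EXAMPLE
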
00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:57.844 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 1488517 ]] 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 1488517 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.EFxYBG 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.EFxYBG/tests/vfio /tmp/spdk.EFxYBG 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:06:57.845 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=678330368 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4606099456 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=52981436416 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730590720 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8749154304 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 
17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30860529664 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865293312 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340125696 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346118144 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5992448 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864310272 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865297408 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=987136 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:06:57.846 * Looking for test storage... 
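set_test_storage then walks the df output captured above, looking for the first mount whose free space covers the ~2 GiB request. A rough sketch of that probe, assuming default df output in 1K blocks (the real set_test_storage also iterates candidate directories and resizes the target, which is skipped here):

    # Parse `df -T`, drop the header row, and report any mount able to
    # hold requested_size. df reports 1K blocks, hence the * 1024; the
    # 2 GiB threshold mirrors the requested_size seen in this run.
    requested_size=$((2 * 1024 * 1024 * 1024))
    while read -r source fs size used avail _ mount; do
        (( avail * 1024 >= requested_size )) && echo "usable: $mount ($fs)"
    done < <(df -T | grep -v Filesystem)
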
00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=52981436416 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10963746816 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:57.846 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.846 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.105 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.106 --rc genhtml_branch_coverage=1 00:06:58.106 --rc genhtml_function_coverage=1 00:06:58.106 --rc genhtml_legend=1 00:06:58.106 --rc geninfo_all_blocks=1 00:06:58.106 --rc geninfo_unexecuted_blocks=1 00:06:58.106 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.106 ' 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.106 --rc genhtml_branch_coverage=1 00:06:58.106 --rc genhtml_function_coverage=1 00:06:58.106 --rc genhtml_legend=1 00:06:58.106 --rc geninfo_all_blocks=1 00:06:58.106 --rc geninfo_unexecuted_blocks=1 00:06:58.106 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.106 ' 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.106 --rc genhtml_branch_coverage=1 00:06:58.106 --rc genhtml_function_coverage=1 00:06:58.106 --rc genhtml_legend=1 00:06:58.106 --rc geninfo_all_blocks=1 00:06:58.106 --rc geninfo_unexecuted_blocks=1 00:06:58.106 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.106 ' 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.106 --rc genhtml_branch_coverage=1 00:06:58.106 --rc genhtml_function_coverage=1 00:06:58.106 --rc genhtml_legend=1 00:06:58.106 --rc geninfo_all_blocks=1 00:06:58.106 --rc geninfo_unexecuted_blocks=1 00:06:58.106 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.106 ' 00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:06:58.106 17:56:19 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=()
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 ))
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]]
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 ))
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%;
00:06:58.106 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:06:58.106 17:56:19 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0
00:06:58.106 [2024-10-05 17:56:19.380606] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:06:58.106 [2024-10-05 17:56:19.380681] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488706 ]
00:06:58.106 [2024-10-05 17:56:19.454526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.106 [2024-10-05 17:56:19.529987] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.364 INFO: Running with entropic power schedule (0xFF, 100).
00:06:58.364 INFO: Seed: 2395250469
00:06:58.364 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:06:58.364 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:06:58.364 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0
00:06:58.364 INFO: A corpus is not provided, starting from an empty corpus
00:06:58.364 #2 INITED exec/s: 0 rss: 68Mb
00:06:58.364 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:58.364 This may also happen if the target rejected all inputs we tried so far
00:06:58.364 [2024-10-05 17:56:19.767714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller
00:06:58.879 NEW_FUNC[1/671]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84
00:06:58.879 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:06:58.879 #21 NEW cov: 11118 ft: 10807 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 InsertRepeatedBytes-CrossOver-ChangeBit-CopyPart-
00:06:59.136 #22 NEW cov: 11135 ft: 14463 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeByte-
00:06:59.393 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:06:59.393 #28 NEW cov: 11152 ft: 15467 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt-
00:06:59.393 #29 NEW cov: 11152 ft: 16863 corp: 5/25b lim: 6 exec/s: 29 rss: 75Mb L: 6/6 MS: 1 ChangeByte-
00:06:59.650 #34 NEW cov: 11152 ft: 17284 corp: 6/31b lim: 6 exec/s: 34 rss: 75Mb L: 6/6 MS: 5 ShuffleBytes-CopyPart-InsertRepeatedBytes-ChangeBit-CopyPart-
00:06:59.907 #36 NEW cov: 11152 ft: 17580 corp: 7/37b lim: 6 exec/s: 36 rss: 76Mb L: 6/6 MS: 2 ChangeBit-InsertRepeatedBytes-
00:07:00.165 #37 NEW cov: 11152 ft: 17668 corp: 8/43b lim: 6 exec/s: 37 rss: 76Mb L: 6/6 MS: 1 ChangeByte-
00:07:00.422 #43 NEW cov: 11159 ft: 17686 corp: 9/49b lim: 6 exec/s: 43 rss: 76Mb L: 6/6 MS: 1 CrossOver-
00:07:00.422 #44 NEW cov: 11159 ft: 17853 corp: 10/55b lim: 6 exec/s: 22 rss: 76Mb L: 6/6 MS: 1 CopyPart-
00:07:00.422 #44 DONE cov: 11159 ft: 17853 corp: 10/55b lim: 6 exec/s: 22 rss: 76Mb
00:07:00.422 Done 44 runs in 2 second(s)
00:07:00.422 [2024-10-05 17:56:21.854390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%;
00:07:00.680 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:00.680 17:56:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1
00:07:00.938 [2024-10-05 17:56:22.148206] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:00.938 [2024-10-05 17:56:22.148279] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489168 ]
00:07:00.938 [2024-10-05 17:56:22.220175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.938 [2024-10-05 17:56:22.293421] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.196 INFO: Running with entropic power schedule (0xFF, 100).
00:07:01.196 INFO: Seed: 861271652
00:07:01.196 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:07:01.196 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:07:01.196 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:07:01.196 INFO: A corpus is not provided, starting from an empty corpus
00:07:01.196 #2 INITED exec/s: 0 rss: 68Mb
00:07:01.196 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:01.196 This may also happen if the target rejected all inputs we tried so far
00:07:01.196 [2024-10-05 17:56:22.536322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller
00:07:01.196 [2024-10-05 17:56:22.578209] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:01.196 [2024-10-05 17:56:22.578239] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:01.196 [2024-10-05 17:56:22.578258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:01.710 NEW_FUNC[1/662]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71
00:07:01.710 NEW_FUNC[2/662]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:01.710 #41 NEW cov: 10860 ft: 11070 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 InsertByte-ChangeBit-ChangeByte-CopyPart-
00:07:01.711 [2024-10-05 17:56:23.062028] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:01.711 [2024-10-05 17:56:23.062065] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:01.711 [2024-10-05 17:56:23.062083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:01.968 NEW_FUNC[1/10]: 0x443a18 in write_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:353
00:07:01.968 NEW_FUNC[2/10]: 0x452098 in spdk_bdev_io_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/bdev_module.h:1444
00:07:01.968 #47 NEW cov: 11123 ft: 14511 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit-
00:07:01.968 [2024-10-05 17:56:23.253440] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:01.968 [2024-10-05 17:56:23.253465] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:01.968 [2024-10-05 17:56:23.253483] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:01.968 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:07:01.968 #48 NEW cov: 11140 ft: 15057 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 CrossOver-
00:07:02.225 [2024-10-05 17:56:23.435970] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:02.225 [2024-10-05 17:56:23.435994] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:02.225 [2024-10-05 17:56:23.436011] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:02.225 #49 NEW cov: 11140 ft: 16681 corp: 5/17b lim: 4 exec/s: 49 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes-
00:07:02.225 [2024-10-05 17:56:23.615596] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:02.225 [2024-10-05 17:56:23.615622] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:02.225 [2024-10-05 17:56:23.615640] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:02.482 #65 NEW cov: 11140 ft: 17462 corp: 6/21b lim: 4 exec/s: 65 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes-
00:07:02.482 [2024-10-05 17:56:23.802082] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:02.482 [2024-10-05 17:56:23.802106] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:02.482 [2024-10-05 17:56:23.802124] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:02.482 #66 NEW cov: 11140 ft: 17608 corp: 7/25b lim: 4 exec/s: 66 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:02.740 [2024-10-05 17:56:23.970835] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:02.740 [2024-10-05 17:56:23.970859] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:02.740 [2024-10-05 17:56:23.970876] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:02.740 #67 NEW cov: 11140 ft: 17969 corp: 8/29b lim: 4 exec/s: 67 rss: 76Mb L: 4/4 MS: 1 ChangeBit-
00:07:02.740 [2024-10-05 17:56:24.136747] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:02.740 [2024-10-05 17:56:24.136770] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:02.740 [2024-10-05 17:56:24.136787] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:02.997 #68 NEW cov: 11140 ft: 18277 corp: 9/33b lim: 4 exec/s: 68 rss: 76Mb L: 4/4 MS: 1 CrossOver-
00:07:02.997 [2024-10-05 17:56:24.302545] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:02.997 [2024-10-05 17:56:24.302568] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:02.997 [2024-10-05 17:56:24.302585] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:02.997 #69 NEW cov: 11147 ft: 18314 corp: 10/37b lim: 4 exec/s: 69 rss: 76Mb L: 4/4 MS: 1 CopyPart-
00:07:03.255 [2024-10-05 17:56:24.470473] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:07:03.255 [2024-10-05 17:56:24.470495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:07:03.255 [2024-10-05 17:56:24.470513] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:07:03.255 #70 NEW cov: 11147 ft: 18369 corp: 11/41b lim: 4 exec/s: 35 rss: 76Mb L: 4/4 MS: 1 ChangeByte-
00:07:03.255 #70 DONE cov: 11147 ft: 18369 corp: 11/41b lim: 4 exec/s: 35 rss: 76Mb
00:07:03.255 Done 70 runs in 2 second(s)
00:07:03.255 [2024-10-05 17:56:24.596379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%;
00:07:03.513 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:03.513 17:56:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2
00:07:03.513 [2024-10-05 17:56:24.886136] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:03.513 [2024-10-05 17:56:24.886234] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489544 ]
00:07:03.513 [2024-10-05 17:56:24.959662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.771 [2024-10-05 17:56:25.032041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.771 INFO: Running with entropic power schedule (0xFF, 100).
00:07:03.771 INFO: Seed: 3601301718
00:07:04.030 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:07:04.030 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:07:04.030 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:07:04.030 INFO: A corpus is not provided, starting from an empty corpus
00:07:04.030 #2 INITED exec/s: 0 rss: 67Mb
00:07:04.030 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:04.030 This may also happen if the target rejected all inputs we tried so far
00:07:04.030 [2024-10-05 17:56:25.269880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller
00:07:04.030 [2024-10-05 17:56:25.330085] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:04.288 NEW_FUNC[1/672]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103
00:07:04.288 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:04.288 #55 NEW cov: 11104 ft: 10771 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes-
00:07:04.545 [2024-10-05 17:56:25.790253] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:04.545 #61 NEW cov: 11121 ft: 13802 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeByte-
00:07:04.545 [2024-10-05 17:56:25.984037] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:04.803 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:07:04.803 #62 NEW cov: 11138 ft: 15713 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt-
00:07:04.803 [2024-10-05 17:56:26.186376] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:05.061 #63 NEW cov: 11138 ft: 15987 corp: 5/33b lim: 8 exec/s: 63 rss: 75Mb L: 8/8 MS: 1 ChangeByte-
00:07:05.061 [2024-10-05 17:56:26.378278] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:05.061 #69 NEW cov: 11138 ft: 16444 corp: 6/41b lim: 8 exec/s: 69 rss: 75Mb L: 8/8 MS: 1 ChangeByte-
00:07:05.318 [2024-10-05 17:56:26.571369] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:05.318 #70 NEW cov: 11138 ft: 16765 corp: 7/49b lim: 8 exec/s: 70 rss: 75Mb L: 8/8 MS: 1 CopyPart-
00:07:05.318 [2024-10-05 17:56:26.759572] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:05.576 #71 NEW cov: 11138 ft: 17584 corp: 8/57b lim: 8 exec/s: 71 rss: 75Mb L: 8/8 MS: 1 CopyPart-
00:07:05.576 [2024-10-05 17:56:26.943001] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:05.833 #72 NEW cov: 11145 ft: 17826 corp: 9/65b lim: 8 exec/s: 72 rss: 75Mb L: 8/8 MS: 1 ChangeBit-
00:07:05.833 [2024-10-05 17:56:27.123482] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:05.833 #73 NEW cov: 11145 ft: 17952 corp: 10/73b lim: 8 exec/s: 73 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes-
00:07:06.091 [2024-10-05 17:56:27.304771] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:07:06.091 #74 NEW cov: 11145 ft: 18202 corp: 11/81b lim: 8 exec/s: 37 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt-
00:07:06.091 #74 DONE cov: 11145 ft: 18202 corp: 11/81b lim: 8 exec/s: 37 rss: 75Mb
00:07:06.091 Done 74 runs in 2 second(s)
00:07:06.091 [2024-10-05 17:56:27.433385] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%;
00:07:06.349 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:06.349 17:56:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3
00:07:06.349 [2024-10-05 17:56:27.728331] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:06.349 [2024-10-05 17:56:27.728418] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490073 ]
00:07:06.349 [2024-10-05 17:56:27.801432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.607 [2024-10-05 17:56:27.873085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.607 INFO: Running with entropic power schedule (0xFF, 100).
00:07:06.607 INFO: Seed: 2151311785
00:07:06.866 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:07:06.866 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:07:06.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:07:06.866 INFO: A corpus is not provided, starting from an empty corpus
00:07:06.866 #2 INITED exec/s: 0 rss: 68Mb
00:07:06.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:06.866 This may also happen if the target rejected all inputs we tried so far
00:07:06.866 [2024-10-05 17:56:28.118541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller
00:07:07.123 NEW_FUNC[1/672]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124
00:07:07.123 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:07.123 #12 NEW cov: 11111 ft: 10950 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 5 CrossOver-CrossOver-InsertByte-InsertRepeatedBytes-InsertRepeatedBytes-
00:07:07.380 #13 NEW cov: 11125 ft: 13824 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart-
00:07:07.636 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:07:07.636 #24 NEW cov: 11142 ft: 14421 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:07.894 #25 NEW cov: 11142 ft: 14591 corp: 5/129b lim: 32 exec/s: 25 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:07.894 #26 NEW cov: 11142 ft: 15365 corp: 6/161b lim: 32 exec/s: 26 rss: 76Mb L: 32/32 MS: 1 CopyPart-
00:07:08.152 #27 NEW cov: 11142 ft: 16178 corp: 7/193b lim: 32 exec/s: 27 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes-
00:07:08.411 #28 NEW cov: 11142 ft: 16505 corp: 8/225b lim: 32 exec/s: 28 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes-
00:07:08.411 #29 NEW cov: 11142 ft: 16617 corp: 9/257b lim: 32 exec/s: 29 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:08.669 #30 NEW cov: 11149 ft: 16931 corp: 10/289b lim: 32 exec/s: 30 rss: 76Mb L: 32/32 MS: 1 CopyPart-
00:07:08.962 #31 NEW cov: 11149 ft: 17055 corp: 11/321b lim: 32 exec/s: 15 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:08.962 #31 DONE cov: 11149 ft: 17055 corp: 11/321b lim: 32 exec/s: 15 rss: 76Mb
00:07:08.962 Done 31 runs in 2 second(s)
00:07:08.962 [2024-10-05 17:56:30.209378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%;
00:07:09.257 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:09.257 17:56:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4
00:07:09.257 [2024-10-05 17:56:30.503845] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:09.257 [2024-10-05 17:56:30.503916] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490609 ]
00:07:09.257 [2024-10-05 17:56:30.578342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.257 [2024-10-05 17:56:30.654506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.516 INFO: Running with entropic power schedule (0xFF, 100).
00:07:09.516 INFO: Seed: 635347807
00:07:09.516 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:07:09.516 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:07:09.516 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:07:09.516 INFO: A corpus is not provided, starting from an empty corpus
00:07:09.516 #2 INITED exec/s: 0 rss: 67Mb
00:07:09.516 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:09.516 This may also happen if the target rejected all inputs we tried so far
00:07:09.516 [2024-10-05 17:56:30.901687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:07:09.516 [2024-10-05 17:56:30.961220] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=327 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:09.516 [2024-10-05 17:56:30.961247] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:09.516 [2024-10-05 17:56:30.961257] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:09.516 [2024-10-05 17:56:30.961275] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:09.516 [2024-10-05 17:56:30.962203] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:09.516 [2024-10-05 17:56:30.962216] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:09.516 [2024-10-05 17:56:30.962231] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:10.034 NEW_FUNC[1/673]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:07:10.034 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:10.034 #89 NEW cov: 11122 ft: 10940 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 2 ChangeByte-InsertRepeatedBytes-
00:07:10.034 [2024-10-05 17:56:31.441436] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x20000000, 0x20000000) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:10.034 [2024-10-05 17:56:31.441475] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x20000000, 0x20000000) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:10.034 [2024-10-05 17:56:31.441487] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:10.034 [2024-10-05 17:56:31.441504] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:10.034 [2024-10-05 17:56:31.442438] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x20000000, 0x20000000) flags=0: No such file or directory
00:07:10.034 [2024-10-05 17:56:31.442459] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:10.034 [2024-10-05 17:56:31.442475] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:10.293 #90 NEW cov: 11140 ft: 14126 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt-
00:07:10.293 [2024-10-05 17:56:31.625164] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:10.293 [2024-10-05 17:56:31.625194] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:10.293 [2024-10-05 17:56:31.625205] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:10.293 [2024-10-05 17:56:31.625221] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:10.293 [2024-10-05 17:56:31.626171] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:10.293 [2024-10-05 17:56:31.626198] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:10.293 [2024-10-05 17:56:31.626213] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:10.293 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:07:10.293 #91 NEW cov: 11157 ft: 15852 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart-
00:07:10.551 [2024-10-05 17:56:31.816988] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x8020000000, 0x8020000000) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:10.551 [2024-10-05 17:56:31.817012] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x8020000000, 0x8020000000) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:10.551 [2024-10-05 17:56:31.817023] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:10.551 [2024-10-05 17:56:31.817040] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:10.551 [2024-10-05 17:56:31.817978] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x8020000000, 0x8020000000) flags=0: No such file or directory
00:07:10.551 [2024-10-05 17:56:31.817998] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:10.551 [2024-10-05 17:56:31.818015] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:10.551 #92 NEW cov: 11157 ft: 16516 corp: 5/129b lim: 32 exec/s: 92 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:07:10.551 [2024-10-05 17:56:32.003383] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x20000000, 0x20000000) fd=329 offset=0x3b00020000000000 prot=0x3: Invalid argument
00:07:10.551 [2024-10-05 17:56:32.003414] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x20000000, 0x20000000) offset=0x3b00020000000000 flags=0x3: Invalid argument
00:07:10.551 [2024-10-05 17:56:32.003425] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:10.551 [2024-10-05 17:56:32.003441] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:10.551 [2024-10-05 17:56:32.004374] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x20000000, 0x20000000) flags=0: No such file or directory
00:07:10.551 [2024-10-05 17:56:32.004395] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:10.551 [2024-10-05 17:56:32.004411] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:10.809 #93 NEW cov: 11157 ft: 16914 corp: 6/161b lim: 32 exec/s: 93 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:07:10.809 [2024-10-05 17:56:32.197887] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x20000000, 0x20000000) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:10.809 [2024-10-05 17:56:32.197912] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x20000000, 0x20000000) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:10.809 [2024-10-05 17:56:32.197923] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:10.809 [2024-10-05 17:56:32.197939] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:10.809 [2024-10-05 17:56:32.198878] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x20000000, 0x20000000) flags=0: No such file or directory
00:07:10.809 [2024-10-05 17:56:32.198898] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:10.809 [2024-10-05 17:56:32.198915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:11.066 #94 NEW cov: 11157 ft: 17019 corp: 7/193b lim: 32 exec/s: 94 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes-
00:07:11.066 [2024-10-05 17:56:32.379540] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:11.066 [2024-10-05 17:56:32.379564] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:11.066 [2024-10-05 17:56:32.379575] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:11.066 [2024-10-05 17:56:32.379591] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:11.066 [2024-10-05 17:56:32.380527] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:11.066 [2024-10-05 17:56:32.380547] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:11.066 [2024-10-05 17:56:32.380562] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:11.324 #95 NEW cov: 11157 ft: 17151 corp: 8/225b lim: 32 exec/s: 95 rss: 76Mb L: 32/32 MS: 1 CrossOver-
00:07:11.324 [2024-10-05 17:56:32.563063] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x20000008, 0x20000008) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:11.324 [2024-10-05 17:56:32.563088] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x20000008, 0x20000008) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:11.324 [2024-10-05 17:56:32.563100] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:11.324 [2024-10-05 17:56:32.563116] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:11.324 [2024-10-05 17:56:32.564102] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x20000008, 0x20000008) flags=0: No such file or directory
00:07:11.324 [2024-10-05 17:56:32.564125] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:11.324 [2024-10-05 17:56:32.564142] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:11.324 #96 NEW cov: 11164 ft: 17651 corp: 9/257b lim: 32 exec/s: 96 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:11.324 [2024-10-05 17:56:32.747667] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=329 offset=0x3b00000000000000 prot=0x3: Invalid argument
00:07:11.324 [2024-10-05 17:56:32.747691] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x3b00000000000000 flags=0x3: Invalid argument
00:07:11.324 [2024-10-05 17:56:32.747701] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:11.324 [2024-10-05 17:56:32.747717] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:11.324 [2024-10-05 17:56:32.748654] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory
00:07:11.324 [2024-10-05 17:56:32.748673] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:11.324 [2024-10-05 17:56:32.748688] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:11.582 #97 NEW cov: 11164 ft: 18115 corp: 10/289b lim: 32 exec/s: 97 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes-
00:07:11.582 [2024-10-05 17:56:32.934052] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x20000000, 0x20000000) fd=329 offset=0x3b00020200000000 prot=0x3: Invalid argument
00:07:11.582 [2024-10-05 17:56:32.934077] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x20000000, 0x20000000) offset=0x3b00020200000000 flags=0x3: Invalid argument
00:07:11.582 [2024-10-05 17:56:32.934088] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:07:11.582 [2024-10-05 17:56:32.934105] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:07:11.583 [2024-10-05 17:56:32.935070] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x20000000, 0x20000000) flags=0: No such file or directory
00:07:11.583 [2024-10-05 17:56:32.935091] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:07:11.583 [2024-10-05 17:56:32.935107] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:07:11.583 #103 NEW cov: 11164 ft: 18262 corp: 11/321b lim: 32 exec/s: 51 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:07:11.583 #103 DONE cov: 11164 ft: 18262 corp: 11/321b lim: 32 exec/s: 51 rss: 76Mb
00:07:11.583 Done 103 runs in 2 second(s)
00:07:11.840 [2024-10-05 17:56:33.060268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:07:12.098 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:12.098 17:56:33 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:07:12.098 [2024-10-05 17:56:33.361254] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:12.098 [2024-10-05 17:56:33.361324] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491157 ]
00:07:12.098 [2024-10-05 17:56:33.431661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.098 [2024-10-05 17:56:33.499484] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.355 INFO: Running with entropic power schedule (0xFF, 100).
00:07:12.355 INFO: Seed: 3477346892
00:07:12.355 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:07:12.355 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:07:12.355 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:07:12.355 INFO: A corpus is not provided, starting from an empty corpus
00:07:12.355 #2 INITED exec/s: 0 rss: 67Mb
00:07:12.355 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:12.355 This may also happen if the target rejected all inputs we tried so far
00:07:12.355 [2024-10-05 17:56:33.735052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:07:12.355 [2024-10-05 17:56:33.780232] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:12.355 [2024-10-05 17:56:33.780268] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:12.869 NEW_FUNC[1/673]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:07:12.869 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:12.869 #17 NEW cov: 11126 ft: 11084 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 5 InsertRepeatedBytes-InsertByte-ChangeBinInt-ChangeBinInt-CopyPart-
00:07:12.869 [2024-10-05 17:56:34.241747] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:12.869 [2024-10-05 17:56:34.241789] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:13.126 #23 NEW cov: 11140 ft: 13391 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:07:13.126 [2024-10-05 17:56:34.425627] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:13.126 [2024-10-05 17:56:34.425661] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:13.126 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:07:13.126 #26 NEW cov: 11157 ft: 14037 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 3 CrossOver-CrossOver-CopyPart-
00:07:13.384 [2024-10-05 17:56:34.618241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:13.384 [2024-10-05 17:56:34.618273] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:13.384 #32 NEW cov: 11157 ft: 14161 corp: 5/53b lim: 13 exec/s: 32 rss: 75Mb L: 13/13 MS: 1 CopyPart-
00:07:13.384 [2024-10-05 17:56:34.800616] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:13.384 [2024-10-05 17:56:34.800647] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:13.642 #41 NEW cov: 11157 ft: 15016 corp: 6/66b lim: 13 exec/s: 41 rss: 75Mb L: 13/13 MS: 4 CrossOver-ShuffleBytes-ChangeBit-CopyPart-
00:07:13.642 [2024-10-05 17:56:34.983442] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:13.642 [2024-10-05 17:56:34.983474] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:13.642 #47 NEW cov: 11157 ft: 15744 corp: 7/79b lim: 13 exec/s: 47 rss: 75Mb L: 13/13 MS: 1 ChangeByte-
00:07:13.900 [2024-10-05 17:56:35.162429] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:13.900 [2024-10-05 17:56:35.162460] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:13.900 #53 NEW cov: 11157 ft: 16326 corp: 8/92b lim: 13 exec/s: 53 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:13.900 [2024-10-05 17:56:35.343615] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:13.900 [2024-10-05 17:56:35.343645] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:14.158 #59 NEW cov: 11157 ft: 16498 corp: 9/105b lim: 13 exec/s: 59 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:14.158 [2024-10-05 17:56:35.524014] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:14.158 [2024-10-05 17:56:35.524045] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:14.417 #60 NEW cov: 11164 ft: 16581 corp: 10/118b lim: 13 exec/s: 60 rss: 76Mb L: 13/13 MS: 1 ChangeByte-
00:07:14.417 [2024-10-05 17:56:35.704554] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:14.417 [2024-10-05 17:56:35.704586] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:14.417 #66 NEW cov: 11164 ft: 16889 corp: 11/131b lim: 13 exec/s: 33 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:14.417 #66 DONE cov: 11164 ft: 16889 corp: 11/131b lim: 13 exec/s: 33 rss: 76Mb
00:07:14.417 Done 66 runs in 2 second(s)
00:07:14.417 [2024-10-05 17:56:35.829378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:07:14.675 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:07:14.676 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:14.676 17:56:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:07:14.676 [2024-10-05 17:56:36.123280] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:14.676 [2024-10-05 17:56:36.123350] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491651 ]
00:07:14.934 [2024-10-05 17:56:36.196015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.934 [2024-10-05 17:56:36.267224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.192 INFO: Running with entropic power schedule (0xFF, 100).
00:07:15.192 INFO: Seed: 1952375021
00:07:15.192 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261),
00:07:15.192 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8),
00:07:15.192 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:15.192 INFO: A corpus is not provided, starting from an empty corpus
00:07:15.192 #2 INITED exec/s: 0 rss: 67Mb
00:07:15.192 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:15.192 This may also happen if the target rejected all inputs we tried so far
00:07:15.192 [2024-10-05 17:56:36.503803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:15.192 [2024-10-05 17:56:36.557215] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:15.192 [2024-10-05 17:56:36.557249] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:15.708 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:15.708 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:15.708 #23 NEW cov: 11118 ft: 10927 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 InsertRepeatedBytes-
00:07:15.708 [2024-10-05 17:56:37.036922] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:15.708 [2024-10-05 17:56:37.036965] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:15.966 #28 NEW cov: 11132 ft: 14057 corp: 3/19b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 ChangeByte-CrossOver-CopyPart-InsertByte-CopyPart-
00:07:15.966 [2024-10-05 17:56:37.235805] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:15.966 [2024-10-05 17:56:37.235839] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:15.966 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:07:15.966 #29 NEW cov: 11149 ft: 15281 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CopyPart-
00:07:15.966 [2024-10-05 17:56:37.426614] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:15.966 [2024-10-05 17:56:37.426646] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:16.224 #30 NEW cov: 11149 ft: 16515 corp: 5/37b lim: 9 exec/s: 30 rss: 74Mb L: 9/9 MS: 1 CopyPart-
00:07:16.224 [2024-10-05 17:56:37.629640] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:16.224 [2024-10-05 17:56:37.629672] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:16.481 #31 NEW cov: 11149 ft: 16967 corp: 6/46b lim: 9 exec/s: 31 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:07:16.481 [2024-10-05 17:56:37.818491] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:16.481 [2024-10-05 17:56:37.818522] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:16.481 #32 NEW cov: 11149 ft: 17332 corp: 7/55b lim: 9 exec/s: 32 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:16.738 [2024-10-05 17:56:38.005503] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:16.738 [2024-10-05 17:56:38.005534] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:16.738 #33 NEW cov: 11149 ft: 17594 corp: 8/64b lim: 9 exec/s: 33 rss: 76Mb L: 9/9 MS: 1 CopyPart-
00:07:16.738 [2024-10-05 17:56:38.195066] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:16.738 [2024-10-05 17:56:38.195097] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:16.995 #34 NEW cov: 11156 ft: 17859 corp: 9/73b lim: 9 exec/s: 34 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:07:16.995 [2024-10-05 17:56:38.382956] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:16.995 [2024-10-05 17:56:38.382986] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:17.253 #35 NEW cov: 11156 ft: 17996 corp: 10/82b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 CrossOver-
00:07:17.253 #35 DONE cov: 11156 ft: 17996 corp: 10/82b lim: 9 exec/s: 17 rss: 76Mb
00:07:17.253 Done 35 runs in 2 second(s)
00:07:17.253 [2024-10-05 17:56:38.519372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:07:17.511 17:56:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:07:17.511 17:56:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:17.511 17:56:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:17.511 17:56:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:07:17.511
00:07:17.511 real 0m19.901s
00:07:17.511 user 0m27.932s
00:07:17.511 sys 0m1.912s
00:07:17.511 17:56:38 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:17.511 17:56:38 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:17.511 ************************************
00:07:17.511 END TEST vfio_llvm_fuzz
00:07:17.511 ************************************
00:07:17.511
00:07:17.511 real 1m24.728s
00:07:17.511 user 2m8.951s
00:07:17.511 sys 0m9.444s
00:07:17.511 17:56:38 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:17.511 17:56:38 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:17.511 ************************************
00:07:17.511 END TEST llvm_fuzz
00:07:17.511 ************************************
00:07:17.511 17:56:38 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:07:17.511 17:56:38 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:07:17.511 17:56:38 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:07:17.511 17:56:38 -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:17.511 17:56:38 -- common/autotest_common.sh@10 -- # set +x
00:07:17.511 17:56:38 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:07:17.511 17:56:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:07:17.511 17:56:38 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:07:17.511 17:56:38 -- common/autotest_common.sh@10 -- # set +x
00:07:24.082 INFO: APP EXITING
00:07:24.082 INFO: killing all VMs
00:07:24.082 INFO: killing vhost app
00:07:24.082 INFO: EXIT DONE
00:07:26.617 Waiting for block devices as requested
00:07:26.875 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:26.875 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:26.875 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:27.133 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:27.133 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:27.133 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:27.133 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:27.391 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:27.391 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:27.391 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:27.650 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:27.650 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:27.650 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:27.908 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:27.908 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:27.908 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:28.167 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:07:31.450 Cleaning
00:07:31.450 Removing: /dev/shm/spdk_tgt_trace.pid1462894
00:07:31.450 Removing: /var/run/dpdk/spdk_pid1460188
00:07:31.450 Removing: /var/run/dpdk/spdk_pid1461404
00:07:31.450 Removing: /var/run/dpdk/spdk_pid1462894
00:07:31.450 Removing: /var/run/dpdk/spdk_pid1463881
00:07:31.450 Removing: /var/run/dpdk/spdk_pid1464759
00:07:31.450 Removing: /var/run/dpdk/spdk_pid1465026
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1465916
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1466106
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1466529
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1466905
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1467237
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1467502
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1467659
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1467944
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1468229
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1468551
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1469396
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1472448
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1472684
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1472900
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1472946
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1473588
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1473724
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1474303
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1474419
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1474859
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1474877
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1475166
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1475188
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1475816
00:07:31.709 Removing: /var/run/dpdk/spdk_pid1476100
Removing: /var/run/dpdk/spdk_pid1476277 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1476469 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1477216 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1477558 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1478041 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1478571 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1478998 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1479395 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1479930 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1480341 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1480754 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1481283 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1481673 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1482110 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1482645 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1483069 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1483481 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1484015 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1484446 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1484838 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1485373 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1485757 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1486191 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1486729 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1487058 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1487552 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1488083 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1488706 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1489168 00:07:31.709 Removing: /var/run/dpdk/spdk_pid1489544 00:07:31.967 Removing: /var/run/dpdk/spdk_pid1490073 00:07:31.967 Removing: /var/run/dpdk/spdk_pid1490609 00:07:31.967 Removing: /var/run/dpdk/spdk_pid1491157 00:07:31.967 Removing: /var/run/dpdk/spdk_pid1491651 00:07:31.967 Clean 00:07:31.967 17:56:53 -- common/autotest_common.sh@1451 -- # return 0 00:07:31.967 17:56:53 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:07:31.967 17:56:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.967 17:56:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.967 17:56:53 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:07:31.967 17:56:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.967 17:56:53 -- common/autotest_common.sh@10 -- # set +x 00:07:31.967 17:56:53 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:31.967 17:56:53 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:31.967 17:56:53 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:31.967 17:56:53 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:07:31.967 17:56:53 -- spdk/autotest.sh@394 -- # hostname 00:07:31.967 17:56:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:07:32.225 geninfo: WARNING: invalid characters removed from testname! 
00:07:38.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:07:38.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:07:45.345 17:57:05 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:51.944 17:57:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:57.214 17:57:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:02.482 17:57:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:07.753 17:57:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:13.023 17:57:34 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:18.373 17:57:39 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:18.373 17:57:39 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:08:18.373 17:57:39 -- common/autotest_common.sh@1681 -- $ lcov --version 00:08:18.373 17:57:39 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:08:18.373 17:57:39 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:08:18.373 17:57:39 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:08:18.373 17:57:39 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:08:18.373 17:57:39 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:08:18.373 17:57:39 -- scripts/common.sh@336 -- $ IFS=.-: 00:08:18.373 17:57:39 -- scripts/common.sh@336 -- $ read -ra ver1 00:08:18.373 17:57:39 -- scripts/common.sh@337 -- $ IFS=.-: 00:08:18.373 17:57:39 -- scripts/common.sh@337 -- $ read -ra ver2 00:08:18.373 17:57:39 -- scripts/common.sh@338 -- $ local 'op=<' 00:08:18.373 17:57:39 -- scripts/common.sh@340 -- $ ver1_l=2 00:08:18.373 17:57:39 -- scripts/common.sh@341 -- $ ver2_l=1 00:08:18.373 17:57:39 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:08:18.373 17:57:39 -- scripts/common.sh@344 -- $ case "$op" in 00:08:18.373 17:57:39 -- scripts/common.sh@345 -- $ : 1 00:08:18.373 17:57:39 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:08:18.373 17:57:39 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.373 17:57:39 -- scripts/common.sh@365 -- $ decimal 1 00:08:18.373 17:57:39 -- scripts/common.sh@353 -- $ local d=1 00:08:18.373 17:57:39 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:08:18.373 17:57:39 -- scripts/common.sh@355 -- $ echo 1 00:08:18.373 17:57:39 -- scripts/common.sh@365 -- $ ver1[v]=1 00:08:18.373 17:57:39 -- scripts/common.sh@366 -- $ decimal 2 00:08:18.373 17:57:39 -- scripts/common.sh@353 -- $ local d=2 00:08:18.373 17:57:39 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:08:18.373 17:57:39 -- scripts/common.sh@355 -- $ echo 2 00:08:18.373 17:57:39 -- scripts/common.sh@366 -- $ ver2[v]=2 00:08:18.374 17:57:39 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:08:18.374 17:57:39 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:08:18.374 17:57:39 -- scripts/common.sh@368 -- $ return 0 00:08:18.374 17:57:39 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.374 17:57:39 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:08:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.374 --rc genhtml_branch_coverage=1 00:08:18.374 --rc genhtml_function_coverage=1 00:08:18.374 --rc genhtml_legend=1 00:08:18.374 --rc geninfo_all_blocks=1 00:08:18.374 --rc geninfo_unexecuted_blocks=1 00:08:18.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:18.374 ' 00:08:18.374 17:57:39 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:08:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.374 --rc genhtml_branch_coverage=1 00:08:18.374 --rc genhtml_function_coverage=1 00:08:18.374 --rc genhtml_legend=1 00:08:18.374 --rc geninfo_all_blocks=1 00:08:18.374 --rc geninfo_unexecuted_blocks=1 00:08:18.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:18.374 ' 00:08:18.374 17:57:39 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:08:18.374 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:08:18.374 --rc genhtml_branch_coverage=1 00:08:18.374 --rc genhtml_function_coverage=1 00:08:18.374 --rc genhtml_legend=1 00:08:18.374 --rc geninfo_all_blocks=1 00:08:18.374 --rc geninfo_unexecuted_blocks=1 00:08:18.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:18.374 ' 00:08:18.374 17:57:39 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:08:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.374 --rc genhtml_branch_coverage=1 00:08:18.374 --rc genhtml_function_coverage=1 00:08:18.374 --rc genhtml_legend=1 00:08:18.374 --rc geninfo_all_blocks=1 00:08:18.374 --rc geninfo_unexecuted_blocks=1 00:08:18.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:18.374 ' 00:08:18.374 17:57:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:18.374 17:57:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:08:18.374 17:57:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:18.374 17:57:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.374 17:57:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.374 17:57:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.374 17:57:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.374 17:57:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.374 17:57:39 -- paths/export.sh@5 -- $ export PATH 00:08:18.374 17:57:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.374 17:57:39 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:18.374 17:57:39 -- common/autobuild_common.sh@486 -- $ date +%s 00:08:18.374 17:57:39 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728143859.XXXXXX 00:08:18.374 17:57:39 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728143859.gE8RpM 00:08:18.374 17:57:39 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:08:18.374 17:57:39 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:08:18.374 17:57:39 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:18.374 17:57:39 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:18.374 17:57:39 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:18.374 17:57:39 -- common/autobuild_common.sh@502 -- $ get_config_params 00:08:18.374 17:57:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:08:18.374 17:57:39 -- common/autotest_common.sh@10 -- $ set +x 00:08:18.374 17:57:39 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:18.374 17:57:39 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:08:18.374 17:57:39 -- pm/common@17 -- $ local monitor 00:08:18.374 17:57:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.374 17:57:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.374 17:57:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.374 17:57:39 -- pm/common@21 -- $ date +%s 00:08:18.374 17:57:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:18.374 17:57:39 -- pm/common@21 -- $ date +%s 00:08:18.374 17:57:39 -- pm/common@21 -- $ date +%s 00:08:18.374 17:57:39 -- pm/common@25 -- $ sleep 1 00:08:18.374 17:57:39 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728143859 00:08:18.374 17:57:39 -- pm/common@21 -- $ date +%s 00:08:18.374 17:57:39 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728143859 00:08:18.374 17:57:39 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728143859 00:08:18.374 17:57:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728143859 00:08:18.374 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728143859_collect-cpu-temp.pm.log 00:08:18.374 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728143859_collect-cpu-load.pm.log 00:08:18.374 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728143859_collect-vmstat.pm.log 00:08:18.374 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728143859_collect-bmc-pm.bmc.pm.log 00:08:19.309 17:57:40 -- common/autobuild_common.sh@505 -- $ trap 
stop_monitor_resources EXIT 00:08:19.309 17:57:40 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:08:19.309 17:57:40 -- spdk/autopackage.sh@14 -- $ timing_finish 00:08:19.309 17:57:40 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:19.309 17:57:40 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:08:19.309 17:57:40 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:19.309 17:57:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:19.309 17:57:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:19.309 17:57:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:19.309 17:57:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:19.309 17:57:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:19.309 17:57:40 -- pm/common@44 -- $ pid=1500678 00:08:19.309 17:57:40 -- pm/common@50 -- $ kill -TERM 1500678 00:08:19.309 17:57:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:19.309 17:57:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:19.309 17:57:40 -- pm/common@44 -- $ pid=1500680 00:08:19.309 17:57:40 -- pm/common@50 -- $ kill -TERM 1500680 00:08:19.309 17:57:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:19.309 17:57:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:19.309 17:57:40 -- pm/common@44 -- $ pid=1500682 00:08:19.309 17:57:40 -- pm/common@50 -- $ kill -TERM 1500682 00:08:19.309 17:57:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:19.309 17:57:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:19.309 17:57:40 -- pm/common@44 -- $ pid=1500709 00:08:19.309 17:57:40 -- pm/common@50 -- $ sudo -E kill -TERM 1500709 00:08:19.309 + [[ -n 1350797 ]] 00:08:19.309 + sudo kill 1350797 00:08:19.318 [Pipeline] } 00:08:19.333 [Pipeline] // stage 00:08:19.338 [Pipeline] } 00:08:19.352 [Pipeline] // timeout 00:08:19.358 [Pipeline] } 00:08:19.371 [Pipeline] // catchError 00:08:19.376 [Pipeline] } 00:08:19.390 [Pipeline] // wrap 00:08:19.396 [Pipeline] } 00:08:19.410 [Pipeline] // catchError 00:08:19.418 [Pipeline] stage 00:08:19.421 [Pipeline] { (Epilogue) 00:08:19.433 [Pipeline] catchError 00:08:19.435 [Pipeline] { 00:08:19.447 [Pipeline] echo 00:08:19.449 Cleanup processes 00:08:19.455 [Pipeline] sh 00:08:19.736 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.736 1500836 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:19.736 1501244 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:19.749 [Pipeline] sh 00:08:20.030 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:20.030 ++ grep -v 'sudo pgrep' 00:08:20.030 ++ awk '{print $1}' 00:08:20.030 + sudo kill -9 1500836 00:08:20.030 + true 00:08:20.042 [Pipeline] sh 00:08:20.322 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:20.322 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:20.322 xz: 
Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:21.695 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:08:31.705 [Pipeline] sh 00:08:31.988 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:31.988 Artifacts sizes are good 00:08:32.002 [Pipeline] archiveArtifacts 00:08:32.008 Archiving artifacts 00:08:32.136 [Pipeline] sh 00:08:32.416 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:32.429 [Pipeline] cleanWs 00:08:32.437 [WS-CLEANUP] Deleting project workspace... 00:08:32.437 [WS-CLEANUP] Deferred wipeout is used... 00:08:32.443 [WS-CLEANUP] done 00:08:32.444 [Pipeline] } 00:08:32.461 [Pipeline] // catchError 00:08:32.471 [Pipeline] sh 00:08:32.747 + logger -p user.info -t JENKINS-CI 00:08:32.754 [Pipeline] } 00:08:32.769 [Pipeline] // stage 00:08:32.775 [Pipeline] } 00:08:32.790 [Pipeline] // node 00:08:32.796 [Pipeline] End of Pipeline 00:08:32.838 Finished: SUCCESS