00:00:00.001 Started by upstream project "autotest-per-patch" build number 131190 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:01.979 The recommended git tool is: git 00:00:01.979 using credential 00000000-0000-0000-0000-000000000002 00:00:01.982 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.993 Fetching changes from the remote Git repository 00:00:01.996 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.008 Using shallow fetch with depth 1 00:00:02.008 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.008 > git --version # timeout=10 00:00:02.019 > git --version # 'git version 2.39.2' 00:00:02.019 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.030 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.031 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.839 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.854 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.868 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD) 00:00:08.868 > git config core.sparsecheckout # timeout=10 00:00:08.883 > git read-tree -mu HEAD # timeout=10 00:00:08.901 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5 00:00:08.922 Commit message: "packer: Bump java's version" 00:00:08.922 > git rev-list --no-walk 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10 00:00:09.052 [Pipeline] Start of Pipeline 00:00:09.063 [Pipeline] library 00:00:09.064 Loading library shm_lib@master 00:00:09.064 Library shm_lib@master is cached. Copying from home. 00:00:09.081 [Pipeline] node 00:00:09.093 Running on WFP49 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:09.095 [Pipeline] { 00:00:09.105 [Pipeline] catchError 00:00:09.106 [Pipeline] { 00:00:09.118 [Pipeline] wrap 00:00:09.128 [Pipeline] { 00:00:09.134 [Pipeline] stage 00:00:09.136 [Pipeline] { (Prologue) 00:00:09.328 [Pipeline] sh 00:00:09.613 + logger -p user.info -t JENKINS-CI 00:00:09.631 [Pipeline] echo 00:00:09.633 Node: WFP49 00:00:09.639 [Pipeline] sh 00:00:09.937 [Pipeline] setCustomBuildProperty 00:00:09.949 [Pipeline] echo 00:00:09.950 Cleanup processes 00:00:09.955 [Pipeline] sh 00:00:10.242 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:10.242 3602524 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:10.253 [Pipeline] sh 00:00:10.536 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:10.536 ++ grep -v 'sudo pgrep' 00:00:10.536 ++ awk '{print $1}' 00:00:10.536 + sudo kill -9 00:00:10.536 + true 00:00:10.548 [Pipeline] cleanWs 00:00:10.560 [WS-CLEANUP] Deleting project workspace... 00:00:10.560 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.566 [WS-CLEANUP] done 00:00:10.569 [Pipeline] setCustomBuildProperty 00:00:10.580 [Pipeline] sh 00:00:10.859 + sudo git config --global --replace-all safe.directory '*' 00:00:10.954 [Pipeline] httpRequest 00:00:11.587 [Pipeline] echo 00:00:11.589 Sorcerer 10.211.164.101 is alive 00:00:11.599 [Pipeline] retry 00:00:11.601 [Pipeline] { 00:00:11.609 [Pipeline] httpRequest 00:00:11.614 HttpMethod: GET 00:00:11.614 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:11.615 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:11.638 Response Code: HTTP/1.1 200 OK 00:00:11.638 Success: Status code 200 is in the accepted range: 200,404 00:00:11.639 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:28.461 [Pipeline] } 00:00:28.477 [Pipeline] // retry 00:00:28.483 [Pipeline] sh 00:00:28.767 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:28.782 [Pipeline] httpRequest 00:00:29.709 [Pipeline] echo 00:00:29.710 Sorcerer 10.211.164.101 is alive 00:00:29.719 [Pipeline] retry 00:00:29.721 [Pipeline] { 00:00:29.735 [Pipeline] httpRequest 00:00:29.740 HttpMethod: GET 00:00:29.740 URL: http://10.211.164.101/packages/spdk_35c8daa94537363a349106f6e69e6a2f69497bde.tar.gz 00:00:29.740 Sending request to url: http://10.211.164.101/packages/spdk_35c8daa94537363a349106f6e69e6a2f69497bde.tar.gz 00:00:29.753 Response Code: HTTP/1.1 200 OK 00:00:29.753 Success: Status code 200 is in the accepted range: 200,404 00:00:29.754 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_35c8daa94537363a349106f6e69e6a2f69497bde.tar.gz 00:04:21.708 [Pipeline] } 00:04:21.725 [Pipeline] // retry 00:04:21.732 [Pipeline] sh 00:04:22.020 + tar --no-same-owner -xf spdk_35c8daa94537363a349106f6e69e6a2f69497bde.tar.gz 00:04:24.573 [Pipeline] sh 00:04:24.860 + git -C spdk log --oneline -n5 00:04:24.860 35c8daa94 nvme/poll_group: create and manage fd_group for nvme poll group 00:04:24.860 0ea3371f3 thread: Extended options for spdk_interrupt_register 00:04:24.860 e85295127 util: fix total fds to wait for 00:04:24.860 6e2689c80 util: handle events for vfio fd type 00:04:24.860 e99566256 util: Extended options for spdk_fd_group_add 00:04:24.871 [Pipeline] } 00:04:24.885 [Pipeline] // stage 00:04:24.894 [Pipeline] stage 00:04:24.896 [Pipeline] { (Prepare) 00:04:24.912 [Pipeline] writeFile 00:04:24.927 [Pipeline] sh 00:04:25.212 + logger -p user.info -t JENKINS-CI 00:04:25.225 [Pipeline] sh 00:04:25.510 + logger -p user.info -t JENKINS-CI 00:04:25.522 [Pipeline] sh 00:04:25.809 + cat autorun-spdk.conf 00:04:25.809 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:25.809 SPDK_TEST_FUZZER_SHORT=1 00:04:25.809 SPDK_TEST_FUZZER=1 00:04:25.809 SPDK_TEST_SETUP=1 00:04:25.809 SPDK_RUN_UBSAN=1 00:04:25.816 RUN_NIGHTLY=0 00:04:25.821 [Pipeline] readFile 00:04:25.843 [Pipeline] withEnv 00:04:25.845 [Pipeline] { 00:04:25.856 [Pipeline] sh 00:04:26.142 + set -ex 00:04:26.142 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:04:26.142 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:04:26.142 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.142 ++ SPDK_TEST_FUZZER_SHORT=1 00:04:26.142 ++ SPDK_TEST_FUZZER=1 00:04:26.142 ++ SPDK_TEST_SETUP=1 00:04:26.142 ++ SPDK_RUN_UBSAN=1 00:04:26.142 ++ RUN_NIGHTLY=0 00:04:26.142 + case $SPDK_TEST_NVMF_NICS in 00:04:26.142 + DRIVERS= 00:04:26.142 + 
[[ -n '' ]] 00:04:26.142 + exit 0 00:04:26.151 [Pipeline] } 00:04:26.165 [Pipeline] // withEnv 00:04:26.169 [Pipeline] } 00:04:26.181 [Pipeline] // stage 00:04:26.190 [Pipeline] catchError 00:04:26.191 [Pipeline] { 00:04:26.203 [Pipeline] timeout 00:04:26.203 Timeout set to expire in 30 min 00:04:26.205 [Pipeline] { 00:04:26.217 [Pipeline] stage 00:04:26.219 [Pipeline] { (Tests) 00:04:26.232 [Pipeline] sh 00:04:26.520 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:04:26.520 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:04:26.520 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:04:26.520 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:04:26.520 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:26.520 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:04:26.520 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:04:26.520 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:04:26.520 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:04:26.520 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:04:26.520 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:04:26.520 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:04:26.520 + source /etc/os-release 00:04:26.520 ++ NAME='Fedora Linux' 00:04:26.520 ++ VERSION='39 (Cloud Edition)' 00:04:26.520 ++ ID=fedora 00:04:26.520 ++ VERSION_ID=39 00:04:26.520 ++ VERSION_CODENAME= 00:04:26.520 ++ PLATFORM_ID=platform:f39 00:04:26.520 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:26.520 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:26.520 ++ LOGO=fedora-logo-icon 00:04:26.520 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:26.520 ++ HOME_URL=https://fedoraproject.org/ 00:04:26.520 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:26.520 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:26.520 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:26.520 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:26.520 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:26.520 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:26.520 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:26.520 ++ SUPPORT_END=2024-11-12 00:04:26.520 ++ VARIANT='Cloud Edition' 00:04:26.520 ++ VARIANT_ID=cloud 00:04:26.520 + uname -a 00:04:26.520 Linux spdk-wfp-49 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:26.520 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:29.061 Hugepages 00:04:29.061 node hugesize free / total 00:04:29.061 node0 1048576kB 0 / 0 00:04:29.061 node0 2048kB 0 / 0 00:04:29.061 node1 1048576kB 0 / 0 00:04:29.061 node1 2048kB 0 / 0 00:04:29.061 00:04:29.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.061 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:29.061 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:29.321 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:29.321 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:29.321 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:29.321 I/OAT 0000:80:04.2 8086 2021 1 
ioatdma - - 00:04:29.321 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:29.321 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:29.321 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:29.321 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:29.321 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:29.321 + rm -f /tmp/spdk-ld-path 00:04:29.321 + source autorun-spdk.conf 00:04:29.321 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:29.321 ++ SPDK_TEST_FUZZER_SHORT=1 00:04:29.321 ++ SPDK_TEST_FUZZER=1 00:04:29.321 ++ SPDK_TEST_SETUP=1 00:04:29.321 ++ SPDK_RUN_UBSAN=1 00:04:29.321 ++ RUN_NIGHTLY=0 00:04:29.321 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:29.321 + [[ -n '' ]] 00:04:29.321 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:29.321 + for M in /var/spdk/build-*-manifest.txt 00:04:29.321 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:29.321 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:04:29.321 + for M in /var/spdk/build-*-manifest.txt 00:04:29.321 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:29.321 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:04:29.321 + for M in /var/spdk/build-*-manifest.txt 00:04:29.321 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:29.321 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:04:29.321 ++ uname 00:04:29.321 + [[ Linux == \L\i\n\u\x ]] 00:04:29.321 + sudo dmesg -T 00:04:29.321 + sudo dmesg --clear 00:04:29.321 + dmesg_pid=3604412 00:04:29.321 + [[ Fedora Linux == FreeBSD ]] 00:04:29.321 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:29.321 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:29.321 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:29.321 + sudo dmesg -Tw 00:04:29.321 + [[ -x /usr/src/fio-static/fio ]] 00:04:29.321 + export FIO_BIN=/usr/src/fio-static/fio 00:04:29.321 + FIO_BIN=/usr/src/fio-static/fio 00:04:29.321 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:29.321 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:29.321 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:29.321 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:29.321 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:29.321 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:29.321 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:29.321 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:29.321 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:04:29.582 Test configuration: 00:04:29.582 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:29.582 SPDK_TEST_FUZZER_SHORT=1 00:04:29.582 SPDK_TEST_FUZZER=1 00:04:29.582 SPDK_TEST_SETUP=1 00:04:29.582 SPDK_RUN_UBSAN=1 00:04:29.582 RUN_NIGHTLY=0 11:04:10 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:29.582 11:04:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:29.582 11:04:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:29.582 11:04:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:29.582 11:04:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.582 11:04:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.582 11:04:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.582 11:04:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.582 11:04:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.582 11:04:10 -- paths/export.sh@5 -- $ export PATH 00:04:29.582 11:04:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.582 11:04:10 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:04:29.582 11:04:10 -- common/autobuild_common.sh@486 -- $ date +%s 00:04:29.582 11:04:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728983050.XXXXXX 00:04:29.582 11:04:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728983050.F0prDw 00:04:29.582 11:04:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' 
]] 00:04:29.582 11:04:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:04:29.582 11:04:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:04:29.582 11:04:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:29.582 11:04:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:29.582 11:04:10 -- common/autobuild_common.sh@502 -- $ get_config_params 00:04:29.582 11:04:10 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:04:29.582 11:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:04:29.582 11:04:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:29.582 11:04:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:04:29.582 11:04:10 -- pm/common@17 -- $ local monitor 00:04:29.582 11:04:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.582 11:04:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.582 11:04:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.582 11:04:10 -- pm/common@21 -- $ date +%s 00:04:29.582 11:04:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.582 11:04:10 -- pm/common@21 -- $ date +%s 00:04:29.582 11:04:10 -- pm/common@25 -- $ sleep 1 00:04:29.582 11:04:10 -- pm/common@21 -- $ date +%s 00:04:29.582 11:04:10 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728983050 00:04:29.582 11:04:10 -- pm/common@21 -- $ date +%s 00:04:29.582 11:04:10 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728983050 00:04:29.582 11:04:10 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728983050 00:04:29.582 11:04:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728983050 00:04:29.582 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728983050_collect-cpu-load.pm.log 00:04:29.582 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728983050_collect-vmstat.pm.log 00:04:29.582 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728983050_collect-cpu-temp.pm.log 00:04:29.582 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728983050_collect-bmc-pm.bmc.pm.log 00:04:30.522 11:04:11 -- 
common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:04:30.522 11:04:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:30.522 11:04:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:30.522 11:04:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:30.522 11:04:11 -- spdk/autobuild.sh@16 -- $ date -u 00:04:30.522 Tue Oct 15 09:04:11 AM UTC 2024 00:04:30.522 11:04:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:30.522 v25.01-pre-76-g35c8daa94 00:04:30.522 11:04:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:30.522 11:04:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:30.522 11:04:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:30.522 11:04:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:30.522 11:04:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:30.522 11:04:11 -- common/autotest_common.sh@10 -- $ set +x 00:04:30.782 ************************************ 00:04:30.782 START TEST ubsan 00:04:30.782 ************************************ 00:04:30.782 11:04:11 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:04:30.782 using ubsan 00:04:30.782 00:04:30.782 real 0m0.001s 00:04:30.782 user 0m0.000s 00:04:30.782 sys 0m0.000s 00:04:30.782 11:04:11 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:30.782 11:04:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:30.782 ************************************ 00:04:30.782 END TEST ubsan 00:04:30.782 ************************************ 00:04:30.782 11:04:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:30.782 11:04:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:30.782 11:04:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:30.782 11:04:11 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:04:30.782 11:04:11 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:04:30.782 11:04:11 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:04:30.782 11:04:11 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:04:30.782 11:04:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:30.782 11:04:11 -- common/autotest_common.sh@10 -- $ set +x 00:04:30.782 ************************************ 00:04:30.782 START TEST autobuild_llvm_precompile 00:04:30.782 ************************************ 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:04:30.782 Target: x86_64-redhat-linux-gnu 00:04:30.782 Thread model: posix 00:04:30.782 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:04:30.782 11:04:11 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:04:31.042 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:04:31.042 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:04:31.301 Using 'verbs' RDMA provider 00:04:44.896 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:59.786 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:59.786 Creating mk/config.mk...done. 00:04:59.786 Creating mk/cc.flags.mk...done. 00:04:59.786 Type 'make' to build. 00:04:59.786 00:04:59.786 real 0m27.645s 00:04:59.786 user 0m12.576s 00:04:59.786 sys 0m14.294s 00:04:59.786 11:04:38 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:59.786 11:04:38 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:04:59.786 ************************************ 00:04:59.786 END TEST autobuild_llvm_precompile 00:04:59.786 ************************************ 00:04:59.786 11:04:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:59.786 11:04:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:59.786 11:04:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:59.786 11:04:38 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:04:59.786 11:04:38 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:04:59.786 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:04:59.786 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:04:59.787 Using 'verbs' RDMA provider 00:05:12.264 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:22.251 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:22.822 Creating mk/config.mk...done. 00:05:22.822 Creating mk/cc.flags.mk...done. 
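For reference, the fuzzer_libs glob traced at autobuild_common.sh@38 above can be reproduced standalone. A minimal bash sketch, assuming clang on PATH, extglob support, and the Fedora-style /usr/lib*/clang layout seen in this run; the sed extraction of the major version stands in for the script's clang-version regex match and is an illustrative assumption, not the script's actual code:

  # derive the clang major version ("17" in this run) from `clang --version`
  clang_num=$(clang --version | sed -n 's/^clang version \([0-9][0-9]*\)\..*/\1/p')
  shopt -s nullglob extglob
  # same pattern as the trace above; matches libclang_rt.fuzzer_no_main.a
  # with or without the -x86_64 suffix
  fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  fuzzer_lib=${fuzzer_libs[0]}
  # on this node it resolves to
  # /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a,
  # which configure then receives as --with-fuzzer=$fuzzer_lib
  [[ -e $fuzzer_lib ]] && echo "--with-fuzzer=$fuzzer_lib"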
00:05:22.822 Type 'make' to build. 00:05:22.822 11:05:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:05:22.822 11:05:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:05:22.822 11:05:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:05:22.822 11:05:03 -- common/autotest_common.sh@10 -- $ set +x 00:05:22.822 ************************************ 00:05:22.822 START TEST make 00:05:22.822 ************************************ 00:05:22.822 11:05:03 make -- common/autotest_common.sh@1125 -- $ make -j72 00:05:23.389 make[1]: Nothing to be done for 'all'. 00:05:24.771 The Meson build system 00:05:24.771 Version: 1.5.0 00:05:24.771 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:05:24.771 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:24.771 Build type: native build 00:05:24.771 Project name: libvfio-user 00:05:24.771 Project version: 0.0.1 00:05:24.771 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:05:24.771 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:05:24.771 Host machine cpu family: x86_64 00:05:24.771 Host machine cpu: x86_64 00:05:24.771 Run-time dependency threads found: YES 00:05:24.771 Library dl found: YES 00:05:24.771 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:24.771 Run-time dependency json-c found: YES 0.17 00:05:24.771 Run-time dependency cmocka found: YES 1.1.7 00:05:24.771 Program pytest-3 found: NO 00:05:24.771 Program flake8 found: NO 00:05:24.771 Program misspell-fixer found: NO 00:05:24.771 Program restructuredtext-lint found: NO 00:05:24.771 Program valgrind found: YES (/usr/bin/valgrind) 00:05:24.771 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:24.771 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:24.771 Compiler for C supports arguments -Wwrite-strings: YES 00:05:24.771 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:24.771 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:24.771 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:24.771 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:24.771 Build targets in project: 8 00:05:24.771 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:24.771 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:24.771 00:05:24.771 libvfio-user 0.0.1 00:05:24.771 00:05:24.771 User defined options 00:05:24.771 buildtype : debug 00:05:24.771 default_library: static 00:05:24.771 libdir : /usr/local/lib 00:05:24.771 00:05:24.771 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:25.337 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:25.337 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:25.337 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:05:25.337 [3/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:05:25.337 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:05:25.337 [5/36] Compiling C object samples/null.p/null.c.o 00:05:25.337 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:25.337 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:25.337 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:05:25.337 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:25.337 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:05:25.337 [11/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:25.337 [12/36] Compiling C object samples/server.p/server.c.o 00:05:25.337 [13/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:25.337 [14/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:05:25.337 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:05:25.337 [16/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:25.337 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:05:25.337 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:05:25.337 [19/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:25.337 [20/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:25.337 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:25.337 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:25.337 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:25.337 [24/36] Compiling C object samples/client.p/client.c.o 00:05:25.337 [25/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:25.337 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:25.337 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:05:25.337 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:25.337 [29/36] Linking target samples/client 00:05:25.596 [30/36] Linking static target lib/libvfio-user.a 00:05:25.596 [31/36] Linking target test/unit_tests 00:05:25.596 [32/36] Linking target samples/null 00:05:25.596 [33/36] Linking target samples/server 00:05:25.596 [34/36] Linking target samples/lspci 00:05:25.596 [35/36] Linking target samples/gpio-pci-idio-16 00:05:25.596 [36/36] Linking target samples/shadow_ioeventfd_server 00:05:25.596 INFO: autodetecting backend as ninja 00:05:25.596 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:25.596 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:25.855 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:25.855 ninja: no work to do. 00:05:32.422 The Meson build system 00:05:32.422 Version: 1.5.0 00:05:32.422 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:05:32.422 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:05:32.422 Build type: native build 00:05:32.422 Program cat found: YES (/usr/bin/cat) 00:05:32.422 Project name: DPDK 00:05:32.422 Project version: 24.03.0 00:05:32.422 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:05:32.422 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:05:32.422 Host machine cpu family: x86_64 00:05:32.422 Host machine cpu: x86_64 00:05:32.422 Message: ## Building in Developer Mode ## 00:05:32.422 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:32.422 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:32.422 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:32.422 Program python3 found: YES (/usr/bin/python3) 00:05:32.422 Program cat found: YES (/usr/bin/cat) 00:05:32.422 Compiler for C supports arguments -march=native: YES 00:05:32.422 Checking for size of "void *" : 8 00:05:32.422 Checking for size of "void *" : 8 (cached) 00:05:32.422 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:32.422 Library m found: YES 00:05:32.422 Library numa found: YES 00:05:32.422 Has header "numaif.h" : YES 00:05:32.422 Library fdt found: NO 00:05:32.422 Library execinfo found: NO 00:05:32.422 Has header "execinfo.h" : YES 00:05:32.422 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:32.422 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:32.422 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:32.422 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:32.422 Run-time dependency openssl found: YES 3.1.1 00:05:32.422 Run-time dependency libpcap found: YES 1.10.4 00:05:32.422 Has header "pcap.h" with dependency libpcap: YES 00:05:32.422 Compiler for C supports arguments -Wcast-qual: YES 00:05:32.422 Compiler for C supports arguments -Wdeprecated: YES 00:05:32.422 Compiler for C supports arguments -Wformat: YES 00:05:32.422 Compiler for C supports arguments -Wformat-nonliteral: YES 00:05:32.422 Compiler for C supports arguments -Wformat-security: YES 00:05:32.422 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:32.422 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:32.422 Compiler for C supports arguments -Wnested-externs: YES 00:05:32.422 Compiler for C supports arguments -Wold-style-definition: YES 00:05:32.422 Compiler for C supports arguments -Wpointer-arith: YES 00:05:32.422 Compiler for C supports arguments -Wsign-compare: YES 00:05:32.422 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:32.422 Compiler for C supports arguments -Wundef: YES 00:05:32.422 Compiler for C supports arguments -Wwrite-strings: YES 00:05:32.422 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:32.422 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:05:32.422 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:05:32.422 Program objdump found: YES (/usr/bin/objdump) 00:05:32.422 Compiler for C supports arguments -mavx512f: YES 00:05:32.422 Checking if "AVX512 checking" compiles: YES 00:05:32.422 Fetching value of define "__SSE4_2__" : 1 00:05:32.422 Fetching value of define "__AES__" : 1 00:05:32.422 Fetching value of define "__AVX__" : 1 00:05:32.422 Fetching value of define "__AVX2__" : 1 00:05:32.422 Fetching value of define "__AVX512BW__" : 1 00:05:32.422 Fetching value of define "__AVX512CD__" : 1 00:05:32.422 Fetching value of define "__AVX512DQ__" : 1 00:05:32.422 Fetching value of define "__AVX512F__" : 1 00:05:32.422 Fetching value of define "__AVX512VL__" : 1 00:05:32.422 Fetching value of define "__PCLMUL__" : 1 00:05:32.422 Fetching value of define "__RDRND__" : 1 00:05:32.422 Fetching value of define "__RDSEED__" : 1 00:05:32.422 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:32.422 Fetching value of define "__znver1__" : (undefined) 00:05:32.422 Fetching value of define "__znver2__" : (undefined) 00:05:32.422 Fetching value of define "__znver3__" : (undefined) 00:05:32.422 Fetching value of define "__znver4__" : (undefined) 00:05:32.422 Compiler for C supports arguments -Wno-format-truncation: NO 00:05:32.422 Message: lib/log: Defining dependency "log" 00:05:32.422 Message: lib/kvargs: Defining dependency "kvargs" 00:05:32.422 Message: lib/telemetry: Defining dependency "telemetry" 00:05:32.422 Checking for function "getentropy" : NO 00:05:32.422 Message: lib/eal: Defining dependency "eal" 00:05:32.422 Message: lib/ring: Defining dependency "ring" 00:05:32.422 Message: lib/rcu: Defining dependency "rcu" 00:05:32.422 Message: lib/mempool: Defining dependency "mempool" 00:05:32.422 Message: lib/mbuf: Defining dependency "mbuf" 00:05:32.422 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:32.422 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:32.422 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:32.422 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:32.422 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:32.422 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:32.422 Compiler for C supports arguments -mpclmul: YES 00:05:32.422 Compiler for C supports arguments -maes: YES 00:05:32.422 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:32.422 Compiler for C supports arguments -mavx512bw: YES 00:05:32.422 Compiler for C supports arguments -mavx512dq: YES 00:05:32.422 Compiler for C supports arguments -mavx512vl: YES 00:05:32.422 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:32.422 Compiler for C supports arguments -mavx2: YES 00:05:32.422 Compiler for C supports arguments -mavx: YES 00:05:32.422 Message: lib/net: Defining dependency "net" 00:05:32.422 Message: lib/meter: Defining dependency "meter" 00:05:32.422 Message: lib/ethdev: Defining dependency "ethdev" 00:05:32.422 Message: lib/pci: Defining dependency "pci" 00:05:32.422 Message: lib/cmdline: Defining dependency "cmdline" 00:05:32.422 Message: lib/hash: Defining dependency "hash" 00:05:32.422 Message: lib/timer: Defining dependency "timer" 00:05:32.422 Message: lib/compressdev: Defining dependency "compressdev" 00:05:32.422 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:32.422 Message: lib/dmadev: Defining dependency "dmadev" 00:05:32.422 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:32.422 Message: lib/power: Defining dependency "power" 00:05:32.422 Message: lib/reorder: Defining 
dependency "reorder" 00:05:32.422 Message: lib/security: Defining dependency "security" 00:05:32.422 Has header "linux/userfaultfd.h" : YES 00:05:32.422 Has header "linux/vduse.h" : YES 00:05:32.422 Message: lib/vhost: Defining dependency "vhost" 00:05:32.422 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:05:32.422 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:32.422 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:32.422 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:32.422 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:32.422 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:32.422 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:32.422 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:32.422 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:32.422 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:32.422 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:32.422 Configuring doxy-api-html.conf using configuration 00:05:32.422 Configuring doxy-api-man.conf using configuration 00:05:32.423 Program mandb found: YES (/usr/bin/mandb) 00:05:32.423 Program sphinx-build found: NO 00:05:32.423 Configuring rte_build_config.h using configuration 00:05:32.423 Message: 00:05:32.423 ================= 00:05:32.423 Applications Enabled 00:05:32.423 ================= 00:05:32.423 00:05:32.423 apps: 00:05:32.423 00:05:32.423 00:05:32.423 Message: 00:05:32.423 ================= 00:05:32.423 Libraries Enabled 00:05:32.423 ================= 00:05:32.423 00:05:32.423 libs: 00:05:32.423 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:32.423 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:32.423 cryptodev, dmadev, power, reorder, security, vhost, 00:05:32.423 00:05:32.423 Message: 00:05:32.423 =============== 00:05:32.423 Drivers Enabled 00:05:32.423 =============== 00:05:32.423 00:05:32.423 common: 00:05:32.423 00:05:32.423 bus: 00:05:32.423 pci, vdev, 00:05:32.423 mempool: 00:05:32.423 ring, 00:05:32.423 dma: 00:05:32.423 00:05:32.423 net: 00:05:32.423 00:05:32.423 crypto: 00:05:32.423 00:05:32.423 compress: 00:05:32.423 00:05:32.423 vdpa: 00:05:32.423 00:05:32.423 00:05:32.423 Message: 00:05:32.423 ================= 00:05:32.423 Content Skipped 00:05:32.423 ================= 00:05:32.423 00:05:32.423 apps: 00:05:32.423 dumpcap: explicitly disabled via build config 00:05:32.423 graph: explicitly disabled via build config 00:05:32.423 pdump: explicitly disabled via build config 00:05:32.423 proc-info: explicitly disabled via build config 00:05:32.423 test-acl: explicitly disabled via build config 00:05:32.423 test-bbdev: explicitly disabled via build config 00:05:32.423 test-cmdline: explicitly disabled via build config 00:05:32.423 test-compress-perf: explicitly disabled via build config 00:05:32.423 test-crypto-perf: explicitly disabled via build config 00:05:32.423 test-dma-perf: explicitly disabled via build config 00:05:32.423 test-eventdev: explicitly disabled via build config 00:05:32.423 test-fib: explicitly disabled via build config 00:05:32.423 test-flow-perf: explicitly disabled via build config 00:05:32.423 test-gpudev: explicitly disabled via build config 00:05:32.423 test-mldev: explicitly disabled via build config 00:05:32.423 test-pipeline: explicitly disabled via build config 00:05:32.423 test-pmd: 
explicitly disabled via build config 00:05:32.423 test-regex: explicitly disabled via build config 00:05:32.423 test-sad: explicitly disabled via build config 00:05:32.423 test-security-perf: explicitly disabled via build config 00:05:32.423 00:05:32.423 libs: 00:05:32.423 argparse: explicitly disabled via build config 00:05:32.423 metrics: explicitly disabled via build config 00:05:32.423 acl: explicitly disabled via build config 00:05:32.423 bbdev: explicitly disabled via build config 00:05:32.423 bitratestats: explicitly disabled via build config 00:05:32.423 bpf: explicitly disabled via build config 00:05:32.423 cfgfile: explicitly disabled via build config 00:05:32.423 distributor: explicitly disabled via build config 00:05:32.423 efd: explicitly disabled via build config 00:05:32.423 eventdev: explicitly disabled via build config 00:05:32.423 dispatcher: explicitly disabled via build config 00:05:32.423 gpudev: explicitly disabled via build config 00:05:32.423 gro: explicitly disabled via build config 00:05:32.423 gso: explicitly disabled via build config 00:05:32.423 ip_frag: explicitly disabled via build config 00:05:32.423 jobstats: explicitly disabled via build config 00:05:32.423 latencystats: explicitly disabled via build config 00:05:32.423 lpm: explicitly disabled via build config 00:05:32.423 member: explicitly disabled via build config 00:05:32.423 pcapng: explicitly disabled via build config 00:05:32.423 rawdev: explicitly disabled via build config 00:05:32.423 regexdev: explicitly disabled via build config 00:05:32.423 mldev: explicitly disabled via build config 00:05:32.423 rib: explicitly disabled via build config 00:05:32.423 sched: explicitly disabled via build config 00:05:32.423 stack: explicitly disabled via build config 00:05:32.423 ipsec: explicitly disabled via build config 00:05:32.423 pdcp: explicitly disabled via build config 00:05:32.423 fib: explicitly disabled via build config 00:05:32.423 port: explicitly disabled via build config 00:05:32.423 pdump: explicitly disabled via build config 00:05:32.423 table: explicitly disabled via build config 00:05:32.423 pipeline: explicitly disabled via build config 00:05:32.423 graph: explicitly disabled via build config 00:05:32.423 node: explicitly disabled via build config 00:05:32.423 00:05:32.423 drivers: 00:05:32.423 common/cpt: not in enabled drivers build config 00:05:32.423 common/dpaax: not in enabled drivers build config 00:05:32.423 common/iavf: not in enabled drivers build config 00:05:32.423 common/idpf: not in enabled drivers build config 00:05:32.423 common/ionic: not in enabled drivers build config 00:05:32.423 common/mvep: not in enabled drivers build config 00:05:32.423 common/octeontx: not in enabled drivers build config 00:05:32.423 bus/auxiliary: not in enabled drivers build config 00:05:32.423 bus/cdx: not in enabled drivers build config 00:05:32.423 bus/dpaa: not in enabled drivers build config 00:05:32.423 bus/fslmc: not in enabled drivers build config 00:05:32.423 bus/ifpga: not in enabled drivers build config 00:05:32.423 bus/platform: not in enabled drivers build config 00:05:32.423 bus/uacce: not in enabled drivers build config 00:05:32.423 bus/vmbus: not in enabled drivers build config 00:05:32.423 common/cnxk: not in enabled drivers build config 00:05:32.423 common/mlx5: not in enabled drivers build config 00:05:32.423 common/nfp: not in enabled drivers build config 00:05:32.423 common/nitrox: not in enabled drivers build config 00:05:32.423 common/qat: not in enabled drivers build config 
00:05:32.423 common/sfc_efx: not in enabled drivers build config 00:05:32.423 mempool/bucket: not in enabled drivers build config 00:05:32.423 mempool/cnxk: not in enabled drivers build config 00:05:32.423 mempool/dpaa: not in enabled drivers build config 00:05:32.423 mempool/dpaa2: not in enabled drivers build config 00:05:32.423 mempool/octeontx: not in enabled drivers build config 00:05:32.423 mempool/stack: not in enabled drivers build config 00:05:32.423 dma/cnxk: not in enabled drivers build config 00:05:32.423 dma/dpaa: not in enabled drivers build config 00:05:32.423 dma/dpaa2: not in enabled drivers build config 00:05:32.423 dma/hisilicon: not in enabled drivers build config 00:05:32.423 dma/idxd: not in enabled drivers build config 00:05:32.423 dma/ioat: not in enabled drivers build config 00:05:32.423 dma/skeleton: not in enabled drivers build config 00:05:32.423 net/af_packet: not in enabled drivers build config 00:05:32.423 net/af_xdp: not in enabled drivers build config 00:05:32.423 net/ark: not in enabled drivers build config 00:05:32.423 net/atlantic: not in enabled drivers build config 00:05:32.423 net/avp: not in enabled drivers build config 00:05:32.423 net/axgbe: not in enabled drivers build config 00:05:32.423 net/bnx2x: not in enabled drivers build config 00:05:32.423 net/bnxt: not in enabled drivers build config 00:05:32.423 net/bonding: not in enabled drivers build config 00:05:32.423 net/cnxk: not in enabled drivers build config 00:05:32.423 net/cpfl: not in enabled drivers build config 00:05:32.423 net/cxgbe: not in enabled drivers build config 00:05:32.423 net/dpaa: not in enabled drivers build config 00:05:32.423 net/dpaa2: not in enabled drivers build config 00:05:32.423 net/e1000: not in enabled drivers build config 00:05:32.423 net/ena: not in enabled drivers build config 00:05:32.423 net/enetc: not in enabled drivers build config 00:05:32.423 net/enetfec: not in enabled drivers build config 00:05:32.423 net/enic: not in enabled drivers build config 00:05:32.423 net/failsafe: not in enabled drivers build config 00:05:32.423 net/fm10k: not in enabled drivers build config 00:05:32.423 net/gve: not in enabled drivers build config 00:05:32.423 net/hinic: not in enabled drivers build config 00:05:32.423 net/hns3: not in enabled drivers build config 00:05:32.423 net/i40e: not in enabled drivers build config 00:05:32.423 net/iavf: not in enabled drivers build config 00:05:32.423 net/ice: not in enabled drivers build config 00:05:32.423 net/idpf: not in enabled drivers build config 00:05:32.423 net/igc: not in enabled drivers build config 00:05:32.423 net/ionic: not in enabled drivers build config 00:05:32.423 net/ipn3ke: not in enabled drivers build config 00:05:32.423 net/ixgbe: not in enabled drivers build config 00:05:32.423 net/mana: not in enabled drivers build config 00:05:32.423 net/memif: not in enabled drivers build config 00:05:32.423 net/mlx4: not in enabled drivers build config 00:05:32.423 net/mlx5: not in enabled drivers build config 00:05:32.423 net/mvneta: not in enabled drivers build config 00:05:32.423 net/mvpp2: not in enabled drivers build config 00:05:32.423 net/netvsc: not in enabled drivers build config 00:05:32.423 net/nfb: not in enabled drivers build config 00:05:32.423 net/nfp: not in enabled drivers build config 00:05:32.423 net/ngbe: not in enabled drivers build config 00:05:32.423 net/null: not in enabled drivers build config 00:05:32.423 net/octeontx: not in enabled drivers build config 00:05:32.423 net/octeon_ep: not in enabled 
drivers build config 00:05:32.423 net/pcap: not in enabled drivers build config 00:05:32.423 net/pfe: not in enabled drivers build config 00:05:32.423 net/qede: not in enabled drivers build config 00:05:32.423 net/ring: not in enabled drivers build config 00:05:32.423 net/sfc: not in enabled drivers build config 00:05:32.423 net/softnic: not in enabled drivers build config 00:05:32.423 net/tap: not in enabled drivers build config 00:05:32.423 net/thunderx: not in enabled drivers build config 00:05:32.423 net/txgbe: not in enabled drivers build config 00:05:32.423 net/vdev_netvsc: not in enabled drivers build config 00:05:32.423 net/vhost: not in enabled drivers build config 00:05:32.423 net/virtio: not in enabled drivers build config 00:05:32.423 net/vmxnet3: not in enabled drivers build config 00:05:32.423 raw/*: missing internal dependency, "rawdev" 00:05:32.423 crypto/armv8: not in enabled drivers build config 00:05:32.423 crypto/bcmfs: not in enabled drivers build config 00:05:32.423 crypto/caam_jr: not in enabled drivers build config 00:05:32.423 crypto/ccp: not in enabled drivers build config 00:05:32.423 crypto/cnxk: not in enabled drivers build config 00:05:32.423 crypto/dpaa_sec: not in enabled drivers build config 00:05:32.423 crypto/dpaa2_sec: not in enabled drivers build config 00:05:32.423 crypto/ipsec_mb: not in enabled drivers build config 00:05:32.423 crypto/mlx5: not in enabled drivers build config 00:05:32.423 crypto/mvsam: not in enabled drivers build config 00:05:32.423 crypto/nitrox: not in enabled drivers build config 00:05:32.423 crypto/null: not in enabled drivers build config 00:05:32.424 crypto/octeontx: not in enabled drivers build config 00:05:32.424 crypto/openssl: not in enabled drivers build config 00:05:32.424 crypto/scheduler: not in enabled drivers build config 00:05:32.424 crypto/uadk: not in enabled drivers build config 00:05:32.424 crypto/virtio: not in enabled drivers build config 00:05:32.424 compress/isal: not in enabled drivers build config 00:05:32.424 compress/mlx5: not in enabled drivers build config 00:05:32.424 compress/nitrox: not in enabled drivers build config 00:05:32.424 compress/octeontx: not in enabled drivers build config 00:05:32.424 compress/zlib: not in enabled drivers build config 00:05:32.424 regex/*: missing internal dependency, "regexdev" 00:05:32.424 ml/*: missing internal dependency, "mldev" 00:05:32.424 vdpa/ifc: not in enabled drivers build config 00:05:32.424 vdpa/mlx5: not in enabled drivers build config 00:05:32.424 vdpa/nfp: not in enabled drivers build config 00:05:32.424 vdpa/sfc: not in enabled drivers build config 00:05:32.424 event/*: missing internal dependency, "eventdev" 00:05:32.424 baseband/*: missing internal dependency, "bbdev" 00:05:32.424 gpu/*: missing internal dependency, "gpudev" 00:05:32.424 00:05:32.424 00:05:32.424 Build targets in project: 85 00:05:32.424 00:05:32.424 DPDK 24.03.0 00:05:32.424 00:05:32.424 User defined options 00:05:32.424 buildtype : debug 00:05:32.424 default_library : static 00:05:32.424 libdir : lib 00:05:32.424 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:05:32.424 c_args : -fPIC -Werror 00:05:32.424 c_link_args : 00:05:32.424 cpu_instruction_set: native 00:05:32.424 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:05:32.424 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:05:32.424 enable_docs : false 00:05:32.424 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:32.424 enable_kmods : false 00:05:32.424 max_lcores : 128 00:05:32.424 tests : false 00:05:32.424 00:05:32.424 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:32.424 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:05:32.424 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:32.424 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:32.424 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:32.424 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:32.424 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:32.424 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:32.424 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:32.424 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:32.424 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:32.424 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:32.424 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:32.424 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:32.424 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:32.424 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:32.424 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:32.424 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:32.424 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:32.424 [18/268] Linking static target lib/librte_kvargs.a 00:05:32.424 [19/268] Linking static target lib/librte_log.a 00:05:32.687 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:32.687 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:32.687 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:32.687 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:32.687 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:32.687 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:32.687 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:32.687 [27/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:32.687 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:32.687 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:32.687 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:32.687 [31/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:32.687 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:32.687 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:32.687 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:32.687 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:32.687 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:32.687 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:32.687 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:32.687 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:32.687 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:32.687 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:32.687 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:32.687 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:32.687 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:32.687 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:32.687 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:32.687 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:32.687 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:32.687 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:32.687 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:32.687 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:32.687 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:32.687 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:32.687 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:32.687 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:32.687 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:32.687 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:32.687 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:32.687 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:32.948 [60/268] Linking static target lib/librte_telemetry.a 00:05:32.948 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:32.948 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:32.948 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:32.948 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:32.948 [65/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:32.948 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:32.948 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:32.948 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:32.948 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:32.948 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:32.948 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:32.948 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:32.948 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:32.948 [74/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:32.948 [75/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:32.948 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:32.948 [77/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:32.948 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:32.948 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:32.948 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:32.948 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:32.948 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:32.948 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:32.948 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:32.948 [85/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.948 [86/268] Linking static target lib/librte_pci.a 00:05:32.948 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:32.948 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:32.948 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:32.948 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:32.948 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:32.948 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:32.948 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:32.948 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:32.948 [95/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:32.948 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:32.948 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:32.948 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:32.948 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:32.948 [100/268] Linking static target lib/librte_ring.a 00:05:32.948 [101/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:32.948 [102/268] Linking static target lib/librte_eal.a 00:05:32.948 [103/268] Linking static target lib/librte_rcu.a 00:05:32.948 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:32.948 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:32.948 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:32.948 [107/268] Linking static target lib/librte_mempool.a 00:05:32.948 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:32.948 [109/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:32.948 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:33.208 [111/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:33.208 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:33.208 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:33.208 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:33.208 [115/268] Linking static target lib/librte_mbuf.a 00:05:33.208 [116/268] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:33.208 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.208 [118/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.208 [119/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:33.467 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:33.467 [121/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:33.467 [122/268] Linking target lib/librte_log.so.24.1 00:05:33.467 [123/268] Linking static target lib/librte_net.a 00:05:33.467 [124/268] Linking static target lib/librte_meter.a 00:05:33.467 [125/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.467 [126/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.467 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:33.467 [128/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:33.467 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:33.467 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:33.467 [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:33.467 [132/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.467 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:33.467 [134/268] Linking static target lib/librte_timer.a 00:05:33.467 [135/268] Linking static target lib/librte_cmdline.a 00:05:33.467 [136/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:33.467 [137/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:33.467 [138/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:33.467 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:33.467 [140/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:33.467 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:33.467 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:33.467 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:33.467 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:33.467 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:33.467 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:33.467 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:33.467 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:33.467 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:33.467 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:33.467 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:33.467 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:33.467 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:33.467 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:33.467 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 
00:05:33.467 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:33.467 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:33.467 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:33.467 [159/268] Linking static target lib/librte_dmadev.a 00:05:33.467 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:33.467 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:33.467 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:33.467 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:33.467 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:33.726 [165/268] Linking static target lib/librte_compressdev.a 00:05:33.726 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:33.726 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:33.726 [168/268] Linking target lib/librte_kvargs.so.24.1 00:05:33.726 [169/268] Linking target lib/librte_telemetry.so.24.1 00:05:33.726 [170/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:33.726 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:33.726 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:33.726 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:33.726 [174/268] Linking static target lib/librte_reorder.a 00:05:33.726 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:33.726 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:33.726 [177/268] Linking static target lib/librte_power.a 00:05:33.726 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:33.726 [179/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.726 [180/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.726 [181/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:33.726 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:33.726 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:33.726 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:33.726 [185/268] Linking static target lib/librte_security.a 00:05:33.726 [186/268] Linking static target lib/librte_hash.a 00:05:33.726 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:33.726 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:33.726 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:33.726 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:33.726 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:33.727 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:33.727 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:33.727 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:33.727 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:33.727 [196/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:33.727 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.727 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:33.727 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:33.727 [200/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.727 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:33.727 [202/268] Linking static target lib/librte_cryptodev.a 00:05:33.727 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.727 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:33.727 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:33.985 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:33.985 [207/268] Linking static target drivers/librte_bus_vdev.a 00:05:33.985 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:33.985 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:33.985 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:33.985 [211/268] Linking static target drivers/librte_bus_pci.a 00:05:33.985 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:33.985 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:33.985 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:33.985 [215/268] Linking static target drivers/librte_mempool_ring.a 00:05:33.985 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.985 [217/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:34.243 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:34.243 [219/268] Linking static target lib/librte_ethdev.a 00:05:34.243 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.243 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.243 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.243 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.501 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.759 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.759 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.759 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:34.759 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.759 [229/268] Linking static target lib/librte_vhost.a 00:05:36.131 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.702 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.396 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.298 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.298 [234/268] Linking target lib/librte_eal.so.24.1 00:05:45.298 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:45.298 [236/268] Linking target lib/librte_pci.so.24.1 00:05:45.298 [237/268] Linking target lib/librte_timer.so.24.1 00:05:45.298 [238/268] Linking target lib/librte_meter.so.24.1 00:05:45.298 [239/268] Linking target lib/librte_ring.so.24.1 00:05:45.298 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:45.298 [241/268] Linking target lib/librte_dmadev.so.24.1 00:05:45.555 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:45.555 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:45.555 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:45.555 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:45.555 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:45.555 [247/268] Linking target lib/librte_rcu.so.24.1 00:05:45.555 [248/268] Linking target lib/librte_mempool.so.24.1 00:05:45.555 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:45.555 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:45.555 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:45.812 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:45.812 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:45.812 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:46.069 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:46.069 [256/268] Linking target lib/librte_compressdev.so.24.1 00:05:46.069 [257/268] Linking target lib/librte_net.so.24.1 00:05:46.069 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:46.069 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:46.069 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:46.069 [261/268] Linking target lib/librte_hash.so.24.1 00:05:46.069 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:46.069 [263/268] Linking target lib/librte_cmdline.so.24.1 00:05:46.069 [264/268] Linking target lib/librte_security.so.24.1 00:05:46.327 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:46.327 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:46.327 [267/268] Linking target lib/librte_power.so.24.1 00:05:46.327 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:46.327 INFO: autodetecting backend as ninja 00:05:46.327 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:05:47.260 CC lib/ut_mock/mock.o 00:05:47.260 CC lib/log/log.o 00:05:47.260 CC lib/log/log_flags.o 00:05:47.260 CC lib/log/log_deprecated.o 00:05:47.260 CC lib/ut/ut.o 00:05:47.517 LIB libspdk_ut_mock.a 00:05:47.517 LIB libspdk_ut.a 00:05:47.517 LIB libspdk_log.a 00:05:47.774 CC lib/util/base64.o 00:05:47.774 CC lib/util/crc32c.o 00:05:47.774 CC lib/util/bit_array.o 00:05:47.774 CC lib/util/cpuset.o 00:05:47.774 CC 
lib/util/crc64.o 00:05:47.774 CC lib/util/crc16.o 00:05:47.774 CC lib/util/crc32.o 00:05:47.774 CC lib/util/crc32_ieee.o 00:05:47.774 CC lib/util/dif.o 00:05:47.774 CC lib/util/fd.o 00:05:47.774 CC lib/util/fd_group.o 00:05:47.774 CC lib/util/file.o 00:05:47.774 CC lib/util/hexlify.o 00:05:47.774 CC lib/util/iov.o 00:05:47.774 CC lib/util/math.o 00:05:47.774 CC lib/util/net.o 00:05:47.774 CC lib/util/pipe.o 00:05:47.774 CC lib/util/strerror_tls.o 00:05:47.774 CC lib/util/string.o 00:05:47.774 CC lib/util/uuid.o 00:05:47.774 CC lib/util/xor.o 00:05:47.774 CC lib/util/zipf.o 00:05:47.774 CC lib/util/md5.o 00:05:47.774 CC lib/dma/dma.o 00:05:47.774 CXX lib/trace_parser/trace.o 00:05:47.774 CC lib/ioat/ioat.o 00:05:47.774 CC lib/vfio_user/host/vfio_user_pci.o 00:05:47.774 CC lib/vfio_user/host/vfio_user.o 00:05:47.774 LIB libspdk_dma.a 00:05:48.032 LIB libspdk_ioat.a 00:05:48.032 LIB libspdk_vfio_user.a 00:05:48.032 LIB libspdk_util.a 00:05:48.290 LIB libspdk_trace_parser.a 00:05:48.290 CC lib/rdma_provider/common.o 00:05:48.290 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:48.290 CC lib/rdma_utils/rdma_utils.o 00:05:48.290 CC lib/conf/conf.o 00:05:48.290 CC lib/json/json_parse.o 00:05:48.290 CC lib/json/json_util.o 00:05:48.290 CC lib/json/json_write.o 00:05:48.290 CC lib/vmd/vmd.o 00:05:48.290 CC lib/vmd/led.o 00:05:48.290 CC lib/env_dpdk/memory.o 00:05:48.290 CC lib/env_dpdk/env.o 00:05:48.290 CC lib/idxd/idxd_kernel.o 00:05:48.290 CC lib/idxd/idxd.o 00:05:48.290 CC lib/env_dpdk/pci.o 00:05:48.290 CC lib/idxd/idxd_user.o 00:05:48.290 CC lib/env_dpdk/threads.o 00:05:48.290 CC lib/env_dpdk/init.o 00:05:48.290 CC lib/env_dpdk/pci_ioat.o 00:05:48.290 CC lib/env_dpdk/pci_virtio.o 00:05:48.290 CC lib/env_dpdk/pci_vmd.o 00:05:48.290 CC lib/env_dpdk/pci_idxd.o 00:05:48.290 CC lib/env_dpdk/pci_event.o 00:05:48.290 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:48.290 CC lib/env_dpdk/sigbus_handler.o 00:05:48.290 CC lib/env_dpdk/pci_dpdk.o 00:05:48.290 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:48.548 LIB libspdk_rdma_provider.a 00:05:48.548 LIB libspdk_conf.a 00:05:48.548 LIB libspdk_rdma_utils.a 00:05:48.548 LIB libspdk_json.a 00:05:48.806 LIB libspdk_idxd.a 00:05:48.806 LIB libspdk_vmd.a 00:05:48.806 CC lib/jsonrpc/jsonrpc_server.o 00:05:48.806 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:48.806 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:48.806 CC lib/jsonrpc/jsonrpc_client.o 00:05:49.064 LIB libspdk_jsonrpc.a 00:05:49.322 CC lib/rpc/rpc.o 00:05:49.322 LIB libspdk_env_dpdk.a 00:05:49.580 LIB libspdk_rpc.a 00:05:49.838 CC lib/trace/trace.o 00:05:49.838 CC lib/trace/trace_flags.o 00:05:49.838 CC lib/trace/trace_rpc.o 00:05:49.838 CC lib/notify/notify.o 00:05:49.838 CC lib/notify/notify_rpc.o 00:05:49.838 CC lib/keyring/keyring_rpc.o 00:05:49.838 CC lib/keyring/keyring.o 00:05:49.838 LIB libspdk_notify.a 00:05:49.838 LIB libspdk_trace.a 00:05:49.838 LIB libspdk_keyring.a 00:05:50.096 CC lib/sock/sock.o 00:05:50.096 CC lib/sock/sock_rpc.o 00:05:50.096 CC lib/thread/thread.o 00:05:50.096 CC lib/thread/iobuf.o 00:05:50.354 LIB libspdk_sock.a 00:05:50.612 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:50.871 CC lib/nvme/nvme_ctrlr.o 00:05:50.871 CC lib/nvme/nvme_fabric.o 00:05:50.871 CC lib/nvme/nvme_ns_cmd.o 00:05:50.871 CC lib/nvme/nvme_ns.o 00:05:50.871 CC lib/nvme/nvme_pcie_common.o 00:05:50.871 CC lib/nvme/nvme_qpair.o 00:05:50.871 CC lib/nvme/nvme_pcie.o 00:05:50.871 CC lib/nvme/nvme.o 00:05:50.871 CC lib/nvme/nvme_quirks.o 00:05:50.871 CC lib/nvme/nvme_transport.o 00:05:50.871 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:50.871 
CC lib/nvme/nvme_discovery.o 00:05:50.871 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:50.871 CC lib/nvme/nvme_io_msg.o 00:05:50.871 CC lib/nvme/nvme_tcp.o 00:05:50.871 CC lib/nvme/nvme_opal.o 00:05:50.871 CC lib/nvme/nvme_poll_group.o 00:05:50.871 CC lib/nvme/nvme_zns.o 00:05:50.871 CC lib/nvme/nvme_vfio_user.o 00:05:50.871 CC lib/nvme/nvme_stubs.o 00:05:50.871 CC lib/nvme/nvme_auth.o 00:05:50.871 CC lib/nvme/nvme_cuse.o 00:05:50.871 CC lib/nvme/nvme_rdma.o 00:05:50.871 LIB libspdk_thread.a 00:05:51.130 CC lib/blob/zeroes.o 00:05:51.130 CC lib/blob/blob_bs_dev.o 00:05:51.130 CC lib/blob/blobstore.o 00:05:51.130 CC lib/blob/request.o 00:05:51.130 CC lib/fsdev/fsdev_io.o 00:05:51.130 CC lib/virtio/virtio_vhost_user.o 00:05:51.130 CC lib/fsdev/fsdev.o 00:05:51.130 CC lib/virtio/virtio.o 00:05:51.130 CC lib/virtio/virtio_vfio_user.o 00:05:51.130 CC lib/fsdev/fsdev_rpc.o 00:05:51.130 CC lib/virtio/virtio_pci.o 00:05:51.130 CC lib/init/json_config.o 00:05:51.130 CC lib/init/rpc.o 00:05:51.130 CC lib/init/subsystem_rpc.o 00:05:51.130 CC lib/init/subsystem.o 00:05:51.389 CC lib/accel/accel_rpc.o 00:05:51.389 CC lib/accel/accel.o 00:05:51.389 CC lib/accel/accel_sw.o 00:05:51.389 CC lib/vfu_tgt/tgt_rpc.o 00:05:51.389 CC lib/vfu_tgt/tgt_endpoint.o 00:05:51.389 LIB libspdk_init.a 00:05:51.389 LIB libspdk_virtio.a 00:05:51.389 LIB libspdk_vfu_tgt.a 00:05:51.648 LIB libspdk_fsdev.a 00:05:51.648 CC lib/event/app.o 00:05:51.648 CC lib/event/reactor.o 00:05:51.648 CC lib/event/log_rpc.o 00:05:51.648 CC lib/event/app_rpc.o 00:05:51.648 CC lib/event/scheduler_static.o 00:05:51.907 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:51.907 LIB libspdk_event.a 00:05:52.166 LIB libspdk_accel.a 00:05:52.166 LIB libspdk_nvme.a 00:05:52.427 LIB libspdk_fuse_dispatcher.a 00:05:52.427 CC lib/bdev/bdev.o 00:05:52.427 CC lib/bdev/bdev_rpc.o 00:05:52.427 CC lib/bdev/scsi_nvme.o 00:05:52.427 CC lib/bdev/bdev_zone.o 00:05:52.427 CC lib/bdev/part.o 00:05:52.993 LIB libspdk_blob.a 00:05:53.252 CC lib/lvol/lvol.o 00:05:53.252 CC lib/blobfs/blobfs.o 00:05:53.252 CC lib/blobfs/tree.o 00:05:53.819 LIB libspdk_lvol.a 00:05:53.819 LIB libspdk_blobfs.a 00:05:54.079 LIB libspdk_bdev.a 00:05:54.344 CC lib/nbd/nbd.o 00:05:54.344 CC lib/nbd/nbd_rpc.o 00:05:54.344 CC lib/ublk/ublk.o 00:05:54.344 CC lib/ublk/ublk_rpc.o 00:05:54.344 CC lib/scsi/dev.o 00:05:54.344 CC lib/nvmf/ctrlr.o 00:05:54.344 CC lib/nvmf/ctrlr_discovery.o 00:05:54.344 CC lib/scsi/scsi.o 00:05:54.344 CC lib/scsi/lun.o 00:05:54.344 CC lib/scsi/port.o 00:05:54.344 CC lib/nvmf/ctrlr_bdev.o 00:05:54.344 CC lib/ftl/ftl_layout.o 00:05:54.344 CC lib/nvmf/subsystem.o 00:05:54.344 CC lib/ftl/ftl_core.o 00:05:54.344 CC lib/ftl/ftl_debug.o 00:05:54.344 CC lib/nvmf/nvmf.o 00:05:54.344 CC lib/ftl/ftl_init.o 00:05:54.344 CC lib/scsi/scsi_bdev.o 00:05:54.344 CC lib/nvmf/nvmf_rpc.o 00:05:54.344 CC lib/nvmf/transport.o 00:05:54.344 CC lib/scsi/scsi_pr.o 00:05:54.344 CC lib/scsi/scsi_rpc.o 00:05:54.344 CC lib/nvmf/stubs.o 00:05:54.344 CC lib/nvmf/tcp.o 00:05:54.344 CC lib/ftl/ftl_io.o 00:05:54.344 CC lib/scsi/task.o 00:05:54.344 CC lib/ftl/ftl_sb.o 00:05:54.344 CC lib/nvmf/mdns_server.o 00:05:54.344 CC lib/ftl/ftl_l2p.o 00:05:54.344 CC lib/nvmf/vfio_user.o 00:05:54.344 CC lib/ftl/ftl_l2p_flat.o 00:05:54.344 CC lib/ftl/ftl_nv_cache.o 00:05:54.344 CC lib/nvmf/rdma.o 00:05:54.344 CC lib/nvmf/auth.o 00:05:54.344 CC lib/ftl/ftl_band.o 00:05:54.344 CC lib/ftl/ftl_band_ops.o 00:05:54.344 CC lib/ftl/ftl_reloc.o 00:05:54.344 CC lib/ftl/ftl_writer.o 00:05:54.344 CC lib/ftl/ftl_rq.o 00:05:54.344 CC 
lib/ftl/ftl_p2l.o 00:05:54.344 CC lib/ftl/ftl_l2p_cache.o 00:05:54.344 CC lib/ftl/ftl_p2l_log.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:54.344 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:54.344 CC lib/ftl/utils/ftl_conf.o 00:05:54.344 CC lib/ftl/utils/ftl_md.o 00:05:54.344 CC lib/ftl/utils/ftl_mempool.o 00:05:54.344 CC lib/ftl/utils/ftl_bitmap.o 00:05:54.344 CC lib/ftl/utils/ftl_property.o 00:05:54.344 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:54.344 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:54.344 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:54.344 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:54.344 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:54.344 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:54.344 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:54.344 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:54.344 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:54.344 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:54.344 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:54.344 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:54.602 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:54.602 CC lib/ftl/base/ftl_base_dev.o 00:05:54.602 CC lib/ftl/base/ftl_base_bdev.o 00:05:54.602 CC lib/ftl/ftl_trace.o 00:05:54.861 LIB libspdk_nbd.a 00:05:54.861 LIB libspdk_scsi.a 00:05:54.861 LIB libspdk_ublk.a 00:05:55.120 CC lib/iscsi/conn.o 00:05:55.120 CC lib/iscsi/init_grp.o 00:05:55.120 CC lib/iscsi/iscsi.o 00:05:55.120 CC lib/iscsi/param.o 00:05:55.120 CC lib/iscsi/portal_grp.o 00:05:55.120 CC lib/iscsi/tgt_node.o 00:05:55.120 CC lib/iscsi/task.o 00:05:55.120 CC lib/iscsi/iscsi_subsystem.o 00:05:55.120 CC lib/iscsi/iscsi_rpc.o 00:05:55.120 LIB libspdk_ftl.a 00:05:55.120 CC lib/vhost/vhost.o 00:05:55.120 CC lib/vhost/vhost_rpc.o 00:05:55.120 CC lib/vhost/vhost_scsi.o 00:05:55.120 CC lib/vhost/vhost_blk.o 00:05:55.120 CC lib/vhost/rte_vhost_user.o 00:05:55.687 LIB libspdk_nvmf.a 00:05:55.946 LIB libspdk_vhost.a 00:05:55.946 LIB libspdk_iscsi.a 00:05:56.516 CC module/env_dpdk/env_dpdk_rpc.o 00:05:56.516 CC module/vfu_device/vfu_virtio.o 00:05:56.516 CC module/vfu_device/vfu_virtio_scsi.o 00:05:56.516 CC module/vfu_device/vfu_virtio_blk.o 00:05:56.516 CC module/vfu_device/vfu_virtio_fs.o 00:05:56.516 CC module/vfu_device/vfu_virtio_rpc.o 00:05:56.516 CC module/fsdev/aio/fsdev_aio.o 00:05:56.516 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:56.516 CC module/fsdev/aio/linux_aio_mgr.o 00:05:56.516 CC module/sock/posix/posix.o 00:05:56.516 CC module/accel/ioat/accel_ioat.o 00:05:56.516 LIB libspdk_env_dpdk_rpc.a 00:05:56.516 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:56.516 CC module/accel/ioat/accel_ioat_rpc.o 00:05:56.516 CC module/blob/bdev/blob_bdev.o 00:05:56.516 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:56.516 CC module/keyring/file/keyring.o 00:05:56.516 CC module/accel/iaa/accel_iaa.o 00:05:56.516 CC module/keyring/file/keyring_rpc.o 00:05:56.516 CC module/accel/iaa/accel_iaa_rpc.o 00:05:56.516 CC module/keyring/linux/keyring.o 00:05:56.516 CC module/keyring/linux/keyring_rpc.o 00:05:56.516 CC module/accel/dsa/accel_dsa.o 00:05:56.516 CC 
module/accel/dsa/accel_dsa_rpc.o 00:05:56.516 CC module/accel/error/accel_error.o 00:05:56.516 CC module/accel/error/accel_error_rpc.o 00:05:56.516 CC module/scheduler/gscheduler/gscheduler.o 00:05:56.775 LIB libspdk_keyring_file.a 00:05:56.775 LIB libspdk_scheduler_dpdk_governor.a 00:05:56.775 LIB libspdk_scheduler_dynamic.a 00:05:56.775 LIB libspdk_keyring_linux.a 00:05:56.775 LIB libspdk_accel_ioat.a 00:05:56.775 LIB libspdk_scheduler_gscheduler.a 00:05:56.775 LIB libspdk_accel_iaa.a 00:05:56.775 LIB libspdk_accel_error.a 00:05:56.775 LIB libspdk_blob_bdev.a 00:05:56.775 LIB libspdk_accel_dsa.a 00:05:56.775 LIB libspdk_vfu_device.a 00:05:57.033 LIB libspdk_sock_posix.a 00:05:57.033 LIB libspdk_fsdev_aio.a 00:05:57.033 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:57.033 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:57.033 CC module/blobfs/bdev/blobfs_bdev.o 00:05:57.033 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:57.033 CC module/bdev/null/bdev_null.o 00:05:57.033 CC module/bdev/null/bdev_null_rpc.o 00:05:57.033 CC module/bdev/gpt/vbdev_gpt.o 00:05:57.033 CC module/bdev/gpt/gpt.o 00:05:57.033 CC module/bdev/error/vbdev_error.o 00:05:57.033 CC module/bdev/delay/vbdev_delay.o 00:05:57.033 CC module/bdev/iscsi/bdev_iscsi.o 00:05:57.033 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:57.033 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:57.033 CC module/bdev/error/vbdev_error_rpc.o 00:05:57.033 CC module/bdev/malloc/bdev_malloc.o 00:05:57.033 CC module/bdev/passthru/vbdev_passthru.o 00:05:57.033 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:57.033 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:57.033 CC module/bdev/nvme/bdev_nvme.o 00:05:57.033 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:57.033 CC module/bdev/nvme/nvme_rpc.o 00:05:57.033 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:57.033 CC module/bdev/nvme/bdev_mdns_client.o 00:05:57.033 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:57.033 CC module/bdev/nvme/vbdev_opal.o 00:05:57.033 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:57.033 CC module/bdev/lvol/vbdev_lvol.o 00:05:57.033 CC module/bdev/aio/bdev_aio_rpc.o 00:05:57.033 CC module/bdev/split/vbdev_split_rpc.o 00:05:57.033 CC module/bdev/aio/bdev_aio.o 00:05:57.033 CC module/bdev/split/vbdev_split.o 00:05:57.033 CC module/bdev/raid/bdev_raid.o 00:05:57.291 CC module/bdev/raid/bdev_raid_sb.o 00:05:57.291 CC module/bdev/raid/raid0.o 00:05:57.291 CC module/bdev/raid/bdev_raid_rpc.o 00:05:57.291 CC module/bdev/raid/raid1.o 00:05:57.291 CC module/bdev/raid/concat.o 00:05:57.291 CC module/bdev/ftl/bdev_ftl.o 00:05:57.291 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:57.291 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:57.291 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:57.291 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:57.291 LIB libspdk_blobfs_bdev.a 00:05:57.291 LIB libspdk_bdev_error.a 00:05:57.291 LIB libspdk_bdev_null.a 00:05:57.291 LIB libspdk_bdev_gpt.a 00:05:57.291 LIB libspdk_bdev_split.a 00:05:57.291 LIB libspdk_bdev_zone_block.a 00:05:57.291 LIB libspdk_bdev_iscsi.a 00:05:57.549 LIB libspdk_bdev_passthru.a 00:05:57.549 LIB libspdk_bdev_ftl.a 00:05:57.549 LIB libspdk_bdev_aio.a 00:05:57.549 LIB libspdk_bdev_delay.a 00:05:57.549 LIB libspdk_bdev_lvol.a 00:05:57.549 LIB libspdk_bdev_malloc.a 00:05:57.549 LIB libspdk_bdev_virtio.a 00:05:57.809 LIB libspdk_bdev_raid.a 00:05:58.377 LIB libspdk_bdev_nvme.a 00:05:58.945 CC module/event/subsystems/iobuf/iobuf.o 00:05:58.945 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:58.945 CC module/event/subsystems/scheduler/scheduler.o 
00:05:58.945 CC module/event/subsystems/vmd/vmd.o 00:05:58.945 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:58.945 CC module/event/subsystems/keyring/keyring.o 00:05:58.945 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:58.945 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:58.945 CC module/event/subsystems/sock/sock.o 00:05:58.945 CC module/event/subsystems/fsdev/fsdev.o 00:05:59.204 LIB libspdk_event_iobuf.a 00:05:59.204 LIB libspdk_event_scheduler.a 00:05:59.204 LIB libspdk_event_keyring.a 00:05:59.204 LIB libspdk_event_vhost_blk.a 00:05:59.204 LIB libspdk_event_vmd.a 00:05:59.204 LIB libspdk_event_sock.a 00:05:59.204 LIB libspdk_event_vfu_tgt.a 00:05:59.204 LIB libspdk_event_fsdev.a 00:05:59.463 CC module/event/subsystems/accel/accel.o 00:05:59.463 LIB libspdk_event_accel.a 00:05:59.721 CC module/event/subsystems/bdev/bdev.o 00:05:59.981 LIB libspdk_event_bdev.a 00:06:00.240 CC module/event/subsystems/scsi/scsi.o 00:06:00.240 CC module/event/subsystems/nbd/nbd.o 00:06:00.240 CC module/event/subsystems/ublk/ublk.o 00:06:00.240 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:00.240 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:00.240 LIB libspdk_event_ublk.a 00:06:00.240 LIB libspdk_event_nbd.a 00:06:00.240 LIB libspdk_event_scsi.a 00:06:00.499 LIB libspdk_event_nvmf.a 00:06:00.499 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:00.758 CC module/event/subsystems/iscsi/iscsi.o 00:06:00.758 LIB libspdk_event_vhost_scsi.a 00:06:00.758 LIB libspdk_event_iscsi.a 00:06:01.020 CC test/rpc_client/rpc_client_test.o 00:06:01.020 TEST_HEADER include/spdk/accel.h 00:06:01.020 CC app/trace_record/trace_record.o 00:06:01.020 TEST_HEADER include/spdk/accel_module.h 00:06:01.020 TEST_HEADER include/spdk/assert.h 00:06:01.020 TEST_HEADER include/spdk/barrier.h 00:06:01.020 TEST_HEADER include/spdk/base64.h 00:06:01.020 TEST_HEADER include/spdk/bdev.h 00:06:01.020 TEST_HEADER include/spdk/bdev_zone.h 00:06:01.020 TEST_HEADER include/spdk/bit_array.h 00:06:01.020 TEST_HEADER include/spdk/bdev_module.h 00:06:01.020 TEST_HEADER include/spdk/bit_pool.h 00:06:01.020 TEST_HEADER include/spdk/blob_bdev.h 00:06:01.020 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:01.020 TEST_HEADER include/spdk/blobfs.h 00:06:01.020 TEST_HEADER include/spdk/blob.h 00:06:01.020 TEST_HEADER include/spdk/conf.h 00:06:01.020 TEST_HEADER include/spdk/config.h 00:06:01.020 TEST_HEADER include/spdk/cpuset.h 00:06:01.020 TEST_HEADER include/spdk/crc16.h 00:06:01.020 TEST_HEADER include/spdk/crc32.h 00:06:01.020 TEST_HEADER include/spdk/crc64.h 00:06:01.020 TEST_HEADER include/spdk/dif.h 00:06:01.020 TEST_HEADER include/spdk/dma.h 00:06:01.020 CXX app/trace/trace.o 00:06:01.020 TEST_HEADER include/spdk/env_dpdk.h 00:06:01.020 TEST_HEADER include/spdk/env.h 00:06:01.020 TEST_HEADER include/spdk/endian.h 00:06:01.020 TEST_HEADER include/spdk/fd.h 00:06:01.020 TEST_HEADER include/spdk/fd_group.h 00:06:01.020 TEST_HEADER include/spdk/file.h 00:06:01.020 TEST_HEADER include/spdk/event.h 00:06:01.020 TEST_HEADER include/spdk/fsdev.h 00:06:01.020 CC app/spdk_nvme_identify/identify.o 00:06:01.020 TEST_HEADER include/spdk/fsdev_module.h 00:06:01.020 TEST_HEADER include/spdk/ftl.h 00:06:01.020 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:01.020 TEST_HEADER include/spdk/gpt_spec.h 00:06:01.020 TEST_HEADER include/spdk/hexlify.h 00:06:01.020 TEST_HEADER include/spdk/histogram_data.h 00:06:01.020 TEST_HEADER include/spdk/idxd_spec.h 00:06:01.020 TEST_HEADER include/spdk/init.h 00:06:01.020 TEST_HEADER include/spdk/ioat.h 
00:06:01.020 TEST_HEADER include/spdk/ioat_spec.h 00:06:01.020 TEST_HEADER include/spdk/idxd.h 00:06:01.020 TEST_HEADER include/spdk/iscsi_spec.h 00:06:01.020 TEST_HEADER include/spdk/json.h 00:06:01.020 CC app/spdk_top/spdk_top.o 00:06:01.020 TEST_HEADER include/spdk/jsonrpc.h 00:06:01.020 TEST_HEADER include/spdk/keyring.h 00:06:01.020 CC app/spdk_nvme_perf/perf.o 00:06:01.020 TEST_HEADER include/spdk/likely.h 00:06:01.020 TEST_HEADER include/spdk/keyring_module.h 00:06:01.020 TEST_HEADER include/spdk/lvol.h 00:06:01.020 TEST_HEADER include/spdk/log.h 00:06:01.020 TEST_HEADER include/spdk/md5.h 00:06:01.020 CC app/spdk_lspci/spdk_lspci.o 00:06:01.020 TEST_HEADER include/spdk/memory.h 00:06:01.020 TEST_HEADER include/spdk/mmio.h 00:06:01.020 TEST_HEADER include/spdk/nbd.h 00:06:01.020 TEST_HEADER include/spdk/net.h 00:06:01.020 TEST_HEADER include/spdk/notify.h 00:06:01.020 TEST_HEADER include/spdk/nvme.h 00:06:01.020 CC app/spdk_nvme_discover/discovery_aer.o 00:06:01.020 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:01.020 TEST_HEADER include/spdk/nvme_intel.h 00:06:01.020 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:01.020 TEST_HEADER include/spdk/nvme_spec.h 00:06:01.020 TEST_HEADER include/spdk/nvme_zns.h 00:06:01.020 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:01.020 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:01.020 TEST_HEADER include/spdk/nvmf.h 00:06:01.020 TEST_HEADER include/spdk/nvmf_spec.h 00:06:01.020 TEST_HEADER include/spdk/nvmf_transport.h 00:06:01.020 TEST_HEADER include/spdk/opal_spec.h 00:06:01.020 TEST_HEADER include/spdk/opal.h 00:06:01.020 TEST_HEADER include/spdk/pci_ids.h 00:06:01.020 TEST_HEADER include/spdk/pipe.h 00:06:01.020 TEST_HEADER include/spdk/queue.h 00:06:01.020 TEST_HEADER include/spdk/reduce.h 00:06:01.020 TEST_HEADER include/spdk/rpc.h 00:06:01.020 TEST_HEADER include/spdk/scheduler.h 00:06:01.020 TEST_HEADER include/spdk/scsi.h 00:06:01.020 TEST_HEADER include/spdk/scsi_spec.h 00:06:01.020 TEST_HEADER include/spdk/sock.h 00:06:01.020 TEST_HEADER include/spdk/stdinc.h 00:06:01.020 TEST_HEADER include/spdk/string.h 00:06:01.020 TEST_HEADER include/spdk/thread.h 00:06:01.020 TEST_HEADER include/spdk/trace.h 00:06:01.020 TEST_HEADER include/spdk/trace_parser.h 00:06:01.020 TEST_HEADER include/spdk/tree.h 00:06:01.020 TEST_HEADER include/spdk/ublk.h 00:06:01.020 TEST_HEADER include/spdk/util.h 00:06:01.020 TEST_HEADER include/spdk/uuid.h 00:06:01.020 TEST_HEADER include/spdk/version.h 00:06:01.020 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:01.020 CC app/spdk_dd/spdk_dd.o 00:06:01.020 TEST_HEADER include/spdk/vhost.h 00:06:01.020 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:01.020 TEST_HEADER include/spdk/vmd.h 00:06:01.020 TEST_HEADER include/spdk/zipf.h 00:06:01.020 TEST_HEADER include/spdk/xor.h 00:06:01.020 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:01.020 CXX test/cpp_headers/accel.o 00:06:01.020 CXX test/cpp_headers/accel_module.o 00:06:01.020 CXX test/cpp_headers/assert.o 00:06:01.020 CXX test/cpp_headers/barrier.o 00:06:01.020 CXX test/cpp_headers/base64.o 00:06:01.020 CXX test/cpp_headers/bdev.o 00:06:01.020 CXX test/cpp_headers/bdev_module.o 00:06:01.020 CXX test/cpp_headers/bdev_zone.o 00:06:01.020 CC app/nvmf_tgt/nvmf_main.o 00:06:01.020 CXX test/cpp_headers/blob_bdev.o 00:06:01.020 CXX test/cpp_headers/bit_pool.o 00:06:01.020 CXX test/cpp_headers/bit_array.o 00:06:01.020 CXX test/cpp_headers/blobfs_bdev.o 00:06:01.020 CXX test/cpp_headers/blobfs.o 00:06:01.020 CXX test/cpp_headers/blob.o 00:06:01.020 CXX test/cpp_headers/conf.o 
00:06:01.020 CXX test/cpp_headers/config.o 00:06:01.020 CXX test/cpp_headers/cpuset.o 00:06:01.020 CC app/iscsi_tgt/iscsi_tgt.o 00:06:01.020 CXX test/cpp_headers/crc16.o 00:06:01.020 CXX test/cpp_headers/crc32.o 00:06:01.020 CXX test/cpp_headers/crc64.o 00:06:01.020 CXX test/cpp_headers/dif.o 00:06:01.020 CXX test/cpp_headers/dma.o 00:06:01.020 CXX test/cpp_headers/endian.o 00:06:01.021 CXX test/cpp_headers/env_dpdk.o 00:06:01.021 CXX test/cpp_headers/env.o 00:06:01.021 CXX test/cpp_headers/event.o 00:06:01.021 CXX test/cpp_headers/fd_group.o 00:06:01.021 CXX test/cpp_headers/fd.o 00:06:01.021 CXX test/cpp_headers/file.o 00:06:01.021 CXX test/cpp_headers/fsdev.o 00:06:01.021 CXX test/cpp_headers/fsdev_module.o 00:06:01.021 CXX test/cpp_headers/ftl.o 00:06:01.021 CXX test/cpp_headers/fuse_dispatcher.o 00:06:01.021 CXX test/cpp_headers/gpt_spec.o 00:06:01.021 CXX test/cpp_headers/hexlify.o 00:06:01.021 CXX test/cpp_headers/histogram_data.o 00:06:01.021 CXX test/cpp_headers/idxd_spec.o 00:06:01.021 CXX test/cpp_headers/idxd.o 00:06:01.021 CXX test/cpp_headers/init.o 00:06:01.021 CXX test/cpp_headers/ioat.o 00:06:01.021 CXX test/cpp_headers/ioat_spec.o 00:06:01.021 CC test/app/stub/stub.o 00:06:01.021 CC test/app/histogram_perf/histogram_perf.o 00:06:01.021 CC test/thread/poller_perf/poller_perf.o 00:06:01.021 CC test/env/vtophys/vtophys.o 00:06:01.021 CC test/env/pci/pci_ut.o 00:06:01.286 CC examples/ioat/perf/perf.o 00:06:01.286 CC test/env/memory/memory_ut.o 00:06:01.286 CXX test/cpp_headers/iscsi_spec.o 00:06:01.286 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:01.286 CC test/thread/lock/spdk_lock.o 00:06:01.286 CC test/app/jsoncat/jsoncat.o 00:06:01.286 CC examples/ioat/verify/verify.o 00:06:01.286 CC examples/util/zipf/zipf.o 00:06:01.286 CC app/fio/nvme/fio_plugin.o 00:06:01.286 CC app/spdk_tgt/spdk_tgt.o 00:06:01.286 CC test/dma/test_dma/test_dma.o 00:06:01.286 CC test/app/bdev_svc/bdev_svc.o 00:06:01.286 CC test/env/mem_callbacks/mem_callbacks.o 00:06:01.286 CC app/fio/bdev/fio_plugin.o 00:06:01.286 LINK spdk_lspci 00:06:01.286 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:01.286 LINK rpc_client_test 00:06:01.286 CXX test/cpp_headers/json.o 00:06:01.286 CXX test/cpp_headers/jsonrpc.o 00:06:01.286 LINK vtophys 00:06:01.286 CXX test/cpp_headers/keyring.o 00:06:01.286 LINK spdk_trace_record 00:06:01.286 CXX test/cpp_headers/keyring_module.o 00:06:01.286 LINK spdk_nvme_discover 00:06:01.286 LINK histogram_perf 00:06:01.286 CXX test/cpp_headers/likely.o 00:06:01.286 CXX test/cpp_headers/log.o 00:06:01.286 CXX test/cpp_headers/lvol.o 00:06:01.286 CXX test/cpp_headers/md5.o 00:06:01.286 CXX test/cpp_headers/memory.o 00:06:01.286 CXX test/cpp_headers/mmio.o 00:06:01.286 CXX test/cpp_headers/nbd.o 00:06:01.286 LINK poller_perf 00:06:01.286 CXX test/cpp_headers/net.o 00:06:01.286 CXX test/cpp_headers/notify.o 00:06:01.286 CXX test/cpp_headers/nvme.o 00:06:01.286 CXX test/cpp_headers/nvme_intel.o 00:06:01.286 CXX test/cpp_headers/nvme_ocssd.o 00:06:01.286 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:01.286 CXX test/cpp_headers/nvme_spec.o 00:06:01.286 CXX test/cpp_headers/nvme_zns.o 00:06:01.286 CXX test/cpp_headers/nvmf_cmd.o 00:06:01.286 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:01.286 LINK jsoncat 00:06:01.286 CXX test/cpp_headers/nvmf.o 00:06:01.286 CXX test/cpp_headers/nvmf_spec.o 00:06:01.286 CXX test/cpp_headers/nvmf_transport.o 00:06:01.286 CXX test/cpp_headers/opal.o 00:06:01.286 CXX test/cpp_headers/opal_spec.o 00:06:01.286 CXX test/cpp_headers/pci_ids.o 00:06:01.286 CXX 
test/cpp_headers/pipe.o 00:06:01.286 CXX test/cpp_headers/queue.o 00:06:01.286 CXX test/cpp_headers/reduce.o 00:06:01.286 CXX test/cpp_headers/rpc.o 00:06:01.286 CXX test/cpp_headers/scheduler.o 00:06:01.286 CXX test/cpp_headers/scsi.o 00:06:01.286 CXX test/cpp_headers/scsi_spec.o 00:06:01.286 LINK zipf 00:06:01.286 LINK interrupt_tgt 00:06:01.286 CXX test/cpp_headers/sock.o 00:06:01.286 LINK nvmf_tgt 00:06:01.286 CXX test/cpp_headers/stdinc.o 00:06:01.286 LINK env_dpdk_post_init 00:06:01.286 CXX test/cpp_headers/string.o 00:06:01.286 LINK stub 00:06:01.286 CXX test/cpp_headers/thread.o 00:06:01.286 CXX test/cpp_headers/trace.o 00:06:01.545 LINK iscsi_tgt 00:06:01.545 LINK verify 00:06:01.545 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:01.545 LINK ioat_perf 00:06:01.545 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:01.545 LINK bdev_svc 00:06:01.545 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:06:01.545 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:06:01.545 LINK spdk_tgt 00:06:01.545 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:01.545 CXX test/cpp_headers/trace_parser.o 00:06:01.545 CXX test/cpp_headers/tree.o 00:06:01.545 CXX test/cpp_headers/ublk.o 00:06:01.545 CXX test/cpp_headers/util.o 00:06:01.545 CXX test/cpp_headers/uuid.o 00:06:01.545 CXX test/cpp_headers/version.o 00:06:01.545 CXX test/cpp_headers/vfio_user_pci.o 00:06:01.545 CXX test/cpp_headers/vfio_user_spec.o 00:06:01.545 CXX test/cpp_headers/vhost.o 00:06:01.545 CXX test/cpp_headers/vmd.o 00:06:01.545 CXX test/cpp_headers/xor.o 00:06:01.545 CXX test/cpp_headers/zipf.o 00:06:01.545 LINK spdk_trace 00:06:01.804 LINK pci_ut 00:06:01.804 LINK spdk_dd 00:06:01.804 LINK test_dma 00:06:01.804 LINK spdk_nvme 00:06:01.804 LINK nvme_fuzz 00:06:01.804 LINK mem_callbacks 00:06:01.804 LINK llvm_vfio_fuzz 00:06:01.804 LINK spdk_bdev 00:06:02.062 LINK vhost_fuzz 00:06:02.062 LINK spdk_nvme_perf 00:06:02.062 CC examples/idxd/perf/perf.o 00:06:02.062 LINK spdk_nvme_identify 00:06:02.062 CC examples/sock/hello_world/hello_sock.o 00:06:02.062 LINK spdk_top 00:06:02.062 CC examples/vmd/lsvmd/lsvmd.o 00:06:02.062 CC examples/thread/thread/thread_ex.o 00:06:02.062 CC examples/vmd/led/led.o 00:06:02.062 LINK llvm_nvme_fuzz 00:06:02.062 LINK lsvmd 00:06:02.062 LINK led 00:06:02.062 LINK hello_sock 00:06:02.062 CC app/vhost/vhost.o 00:06:02.062 LINK thread 00:06:02.062 LINK idxd_perf 00:06:02.320 LINK memory_ut 00:06:02.320 LINK vhost 00:06:02.320 LINK spdk_lock 00:06:02.579 LINK iscsi_fuzz 00:06:02.837 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:02.837 CC examples/nvme/hello_world/hello_world.o 00:06:02.837 CC examples/nvme/reconnect/reconnect.o 00:06:02.837 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:02.837 CC examples/nvme/arbitration/arbitration.o 00:06:02.837 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:02.837 CC examples/nvme/hotplug/hotplug.o 00:06:02.837 CC examples/nvme/abort/abort.o 00:06:03.095 CC test/event/reactor/reactor.o 00:06:03.095 CC test/event/reactor_perf/reactor_perf.o 00:06:03.095 CC test/event/event_perf/event_perf.o 00:06:03.095 LINK pmr_persistence 00:06:03.095 LINK cmb_copy 00:06:03.095 CC test/event/app_repeat/app_repeat.o 00:06:03.095 LINK hello_world 00:06:03.095 LINK hotplug 00:06:03.095 CC test/event/scheduler/scheduler.o 00:06:03.095 LINK reactor 00:06:03.095 LINK reactor_perf 00:06:03.095 LINK event_perf 00:06:03.095 LINK reconnect 00:06:03.095 LINK arbitration 00:06:03.095 LINK abort 00:06:03.095 LINK nvme_manage 00:06:03.095 LINK app_repeat 00:06:03.353 LINK scheduler 
00:06:03.353 CC test/nvme/reserve/reserve.o 00:06:03.353 CC test/nvme/connect_stress/connect_stress.o 00:06:03.353 CC test/nvme/reset/reset.o 00:06:03.353 CC test/nvme/compliance/nvme_compliance.o 00:06:03.353 CC test/nvme/sgl/sgl.o 00:06:03.353 CC test/nvme/simple_copy/simple_copy.o 00:06:03.353 CC test/nvme/err_injection/err_injection.o 00:06:03.353 CC test/nvme/overhead/overhead.o 00:06:03.353 CC test/nvme/fdp/fdp.o 00:06:03.353 CC test/nvme/cuse/cuse.o 00:06:03.353 CC test/nvme/fused_ordering/fused_ordering.o 00:06:03.353 CC test/nvme/aer/aer.o 00:06:03.353 CC test/nvme/startup/startup.o 00:06:03.353 CC test/nvme/e2edp/nvme_dp.o 00:06:03.353 CC test/nvme/boot_partition/boot_partition.o 00:06:03.353 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:03.353 CC test/lvol/esnap/esnap.o 00:06:03.353 CC test/blobfs/mkfs/mkfs.o 00:06:03.353 CC test/accel/dif/dif.o 00:06:03.353 LINK connect_stress 00:06:03.353 LINK startup 00:06:03.353 LINK boot_partition 00:06:03.353 LINK reserve 00:06:03.611 LINK err_injection 00:06:03.611 LINK fused_ordering 00:06:03.611 LINK doorbell_aers 00:06:03.611 LINK simple_copy 00:06:03.611 LINK reset 00:06:03.611 LINK overhead 00:06:03.611 LINK fdp 00:06:03.611 LINK aer 00:06:03.611 LINK nvme_dp 00:06:03.611 LINK sgl 00:06:03.611 LINK nvme_compliance 00:06:03.611 LINK mkfs 00:06:03.869 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:03.869 CC examples/accel/perf/accel_perf.o 00:06:03.869 CC examples/blob/cli/blobcli.o 00:06:03.869 CC examples/blob/hello_world/hello_blob.o 00:06:04.127 LINK dif 00:06:04.127 LINK hello_fsdev 00:06:04.127 LINK hello_blob 00:06:04.127 LINK cuse 00:06:04.385 LINK accel_perf 00:06:04.385 LINK blobcli 00:06:04.952 CC examples/bdev/hello_world/hello_bdev.o 00:06:04.952 CC examples/bdev/bdevperf/bdevperf.o 00:06:05.210 LINK hello_bdev 00:06:05.468 LINK bdevperf 00:06:05.726 CC test/bdev/bdevio/bdevio.o 00:06:05.985 LINK bdevio 00:06:06.921 LINK esnap 00:06:06.921 CC examples/nvmf/nvmf/nvmf.o 00:06:07.180 LINK nvmf 00:06:08.554 00:06:08.554 real 0m45.552s 00:06:08.554 user 6m55.308s 00:06:08.554 sys 2m16.918s 00:06:08.554 11:05:48 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:08.554 11:05:48 make -- common/autotest_common.sh@10 -- $ set +x 00:06:08.554 ************************************ 00:06:08.554 END TEST make 00:06:08.554 ************************************ 00:06:08.554 11:05:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:08.554 11:05:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:08.554 11:05:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:08.554 11:05:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.554 11:05:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:08.554 11:05:48 -- pm/common@44 -- $ pid=3604445 00:06:08.554 11:05:48 -- pm/common@50 -- $ kill -TERM 3604445 00:06:08.554 11:05:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.554 11:05:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:08.554 11:05:48 -- pm/common@44 -- $ pid=3604447 00:06:08.554 11:05:48 -- pm/common@50 -- $ kill -TERM 3604447 00:06:08.554 11:05:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.554 11:05:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:08.554 11:05:49 -- pm/common@44 -- $ 
pid=3604450 00:06:08.554 11:05:49 -- pm/common@50 -- $ kill -TERM 3604450 00:06:08.554 11:05:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.554 11:05:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:08.554 11:05:49 -- pm/common@44 -- $ pid=3604477 00:06:08.554 11:05:49 -- pm/common@50 -- $ sudo -E kill -TERM 3604477 00:06:08.554 11:05:49 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.554 11:05:49 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.554 11:05:49 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.813 11:05:49 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.813 11:05:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.813 11:05:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.813 11:05:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.813 11:05:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.813 11:05:49 -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.813 11:05:49 -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.813 11:05:49 -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.813 11:05:49 -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.813 11:05:49 -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.813 11:05:49 -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.813 11:05:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.813 11:05:49 -- scripts/common.sh@344 -- # case "$op" in 00:06:08.813 11:05:49 -- scripts/common.sh@345 -- # : 1 00:06:08.813 11:05:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.813 11:05:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.813 11:05:49 -- scripts/common.sh@365 -- # decimal 1 00:06:08.813 11:05:49 -- scripts/common.sh@353 -- # local d=1 00:06:08.813 11:05:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.813 11:05:49 -- scripts/common.sh@355 -- # echo 1 00:06:08.813 11:05:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.813 11:05:49 -- scripts/common.sh@366 -- # decimal 2 00:06:08.813 11:05:49 -- scripts/common.sh@353 -- # local d=2 00:06:08.813 11:05:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.813 11:05:49 -- scripts/common.sh@355 -- # echo 2 00:06:08.813 11:05:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.813 11:05:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.813 11:05:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.813 11:05:49 -- scripts/common.sh@368 -- # return 0 00:06:08.813 11:05:49 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.813 11:05:49 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.813 --rc genhtml_branch_coverage=1 00:06:08.813 --rc genhtml_function_coverage=1 00:06:08.813 --rc genhtml_legend=1 00:06:08.813 --rc geninfo_all_blocks=1 00:06:08.813 --rc geninfo_unexecuted_blocks=1 00:06:08.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.813 ' 00:06:08.813 11:05:49 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.813 --rc genhtml_branch_coverage=1 00:06:08.813 --rc genhtml_function_coverage=1 00:06:08.813 --rc genhtml_legend=1 00:06:08.813 --rc geninfo_all_blocks=1 00:06:08.813 --rc geninfo_unexecuted_blocks=1 
00:06:08.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.813 ' 00:06:08.813 11:05:49 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.813 --rc genhtml_branch_coverage=1 00:06:08.813 --rc genhtml_function_coverage=1 00:06:08.813 --rc genhtml_legend=1 00:06:08.813 --rc geninfo_all_blocks=1 00:06:08.813 --rc geninfo_unexecuted_blocks=1 00:06:08.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.813 ' 00:06:08.813 11:05:49 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.813 --rc genhtml_branch_coverage=1 00:06:08.813 --rc genhtml_function_coverage=1 00:06:08.813 --rc genhtml_legend=1 00:06:08.813 --rc geninfo_all_blocks=1 00:06:08.813 --rc geninfo_unexecuted_blocks=1 00:06:08.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.813 ' 00:06:08.813 11:05:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.813 11:05:49 -- nvmf/common.sh@7 -- # uname -s 00:06:08.813 11:05:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.813 11:05:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.813 11:05:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.813 11:05:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.813 11:05:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.813 11:05:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.813 11:05:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.813 11:05:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.813 11:05:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.813 11:05:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.813 11:05:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:08.813 11:05:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:08.813 11:05:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.813 11:05:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.813 11:05:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.813 11:05:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.813 11:05:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:08.813 11:05:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.813 11:05:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.813 11:05:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.813 11:05:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.813 11:05:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.813 11:05:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.813 11:05:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.813 11:05:49 -- paths/export.sh@5 -- # export PATH 00:06:08.813 11:05:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.813 11:05:49 -- nvmf/common.sh@51 -- # : 0 00:06:08.813 11:05:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.813 11:05:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.813 11:05:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.813 11:05:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.813 11:05:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.813 11:05:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.813 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.813 11:05:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.813 11:05:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.813 11:05:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.813 11:05:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:08.813 11:05:49 -- spdk/autotest.sh@32 -- # uname -s 00:06:08.813 11:05:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:08.813 11:05:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:08.813 11:05:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:06:08.813 11:05:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:08.813 11:05:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:06:08.813 11:05:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:08.813 11:05:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:08.813 11:05:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:08.813 11:05:49 -- spdk/autotest.sh@48 -- # udevadm_pid=3663041 00:06:08.813 11:05:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:08.813 11:05:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:08.813 11:05:49 -- pm/common@17 -- # local monitor 00:06:08.813 11:05:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.813 11:05:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.813 11:05:49 -- pm/common@21 -- # date +%s 00:06:08.813 11:05:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.813 11:05:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:08.813 11:05:49 -- pm/common@21 -- # date +%s 00:06:08.813 11:05:49 -- pm/common@21 -- # date +%s 00:06:08.813 11:05:49 -- pm/common@25 -- # sleep 1 
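[annotation — not part of the build log] The pm/common trace above shows the resource-monitor launch pattern: each collector (collect-cpu-load, collect-cpu-temp, collect-vmstat, collect-bmc-pm) is started against the shared power output directory with a timestamped prefix from `date +%s`, and the teardown seen earlier TERMs each one through its recorded PID file. Below is a minimal sketch of that start/stop shape. The script names and output path are taken from this log; the helper functions themselves are a simplified reconstruction, not SPDK's actual pm/common, and writing the PID file from the launcher is an assumption made for brevity.

    #!/usr/bin/env bash
    # Sketch of the monitor lifecycle visible in the trace above (simplified).
    output_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
    prefix="monitor.autotest.sh.$(date +%s)"
    monitors=(collect-cpu-load collect-cpu-temp collect-vmstat)

    start_monitors() {
        local mon
        mkdir -p "$output_dir"
        for mon in "${monitors[@]}"; do
            # Each monitor logs to ${prefix}_${mon}.pm.log (the "Redirecting to"
            # lines above) and its PID is recorded so teardown can TERM it.
            "./scripts/perf/pm/$mon" -d "$output_dir" -l -p "$prefix" &
            echo $! > "$output_dir/$mon.pid"
        done
    }

    stop_monitors() {
        local mon pid
        for mon in "${monitors[@]}"; do
            pid=$(cat "$output_dir/$mon.pid" 2>/dev/null) || continue
            kill -TERM "$pid" 2>/dev/null
        done
    }

The real harness additionally runs the BMC collector under sudo -E, which is why its kill at teardown is also issued through sudo.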
00:06:08.813 11:05:49 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728983149 00:06:08.813 11:05:49 -- pm/common@21 -- # date +%s 00:06:08.813 11:05:49 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728983149 00:06:08.813 11:05:49 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728983149 00:06:08.813 11:05:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728983149 00:06:08.813 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728983149_collect-vmstat.pm.log 00:06:08.813 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728983149_collect-cpu-temp.pm.log 00:06:08.813 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728983149_collect-cpu-load.pm.log 00:06:08.813 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728983149_collect-bmc-pm.bmc.pm.log 00:06:09.748 11:05:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:09.748 11:05:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:09.748 11:05:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.748 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:09.748 11:05:50 -- spdk/autotest.sh@59 -- # create_test_list 00:06:09.748 11:05:50 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:09.748 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:06:09.748 11:05:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:06:09.748 11:05:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:09.748 11:05:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:09.748 11:05:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:06:09.748 11:05:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:09.748 11:05:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:09.748 11:05:50 -- common/autotest_common.sh@1455 -- # uname 00:06:09.748 11:05:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:09.748 11:05:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:09.748 11:05:50 -- common/autotest_common.sh@1475 -- # uname 00:06:09.748 11:05:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:09.748 11:05:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:09.748 11:05:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:06:10.006 lcov: LCOV version 1.15 00:06:10.006 11:05:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:06:18.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:18.378 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:06:26.486 11:06:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:26.486 11:06:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.486 11:06:06 -- common/autotest_common.sh@10 -- # set +x 00:06:26.486 11:06:06 -- spdk/autotest.sh@78 -- # rm -f 00:06:26.486 11:06:06 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:28.386 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:06:28.643 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:06:28.643 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:06:28.906 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:06:28.906 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:06:28.906 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:06:28.906 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:06:28.906 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:06:28.906 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:06:28.906 11:06:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:28.906 11:06:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:28.906 11:06:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:28.906 11:06:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:28.906 11:06:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:28.906 11:06:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:28.906 11:06:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:28.906 11:06:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:28.906 11:06:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:28.906 11:06:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:28.906 11:06:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:28.906 11:06:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:28.906 11:06:09 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:06:28.906 11:06:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:28.906 11:06:09 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:28.906 No valid GPT data, bailing 00:06:29.164 11:06:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:29.164 11:06:09 -- scripts/common.sh@394 -- # pt= 00:06:29.164 11:06:09 -- scripts/common.sh@395 -- # return 1 00:06:29.164 11:06:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:29.164 1+0 records in 00:06:29.164 1+0 records out 00:06:29.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00154184 s, 680 MB/s 00:06:29.164 11:06:09 -- spdk/autotest.sh@105 -- # sync 00:06:29.164 11:06:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:29.164 11:06:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:29.164 11:06:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:34.433 11:06:14 -- spdk/autotest.sh@111 -- # uname -s 00:06:34.433 11:06:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:34.433 11:06:14 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:06:34.433 11:06:14 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:06:34.433 11:06:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.433 11:06:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.433 11:06:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.433 ************************************ 00:06:34.433 START TEST setup.sh 00:06:34.433 ************************************ 00:06:34.433 11:06:14 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:06:34.433 * Looking for test storage... 00:06:34.433 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:34.433 11:06:14 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.433 11:06:14 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.433 11:06:14 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.433 11:06:14 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.433 11:06:14 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@345 -- # : 1 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@353 -- # local d=1 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@355 -- # echo 1 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@353 -- # local d=2 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@355 -- # echo 2 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.434 11:06:14 setup.sh -- scripts/common.sh@368 -- # return 0 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.434 --rc genhtml_branch_coverage=1 00:06:34.434 --rc genhtml_function_coverage=1 00:06:34.434 --rc genhtml_legend=1 00:06:34.434 --rc geninfo_all_blocks=1 00:06:34.434 --rc geninfo_unexecuted_blocks=1 00:06:34.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.434 ' 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.434 --rc genhtml_branch_coverage=1 00:06:34.434 --rc genhtml_function_coverage=1 00:06:34.434 --rc genhtml_legend=1 00:06:34.434 --rc geninfo_all_blocks=1 00:06:34.434 --rc geninfo_unexecuted_blocks=1 00:06:34.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.434 ' 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.434 --rc genhtml_branch_coverage=1 00:06:34.434 --rc genhtml_function_coverage=1 00:06:34.434 --rc genhtml_legend=1 00:06:34.434 --rc geninfo_all_blocks=1 00:06:34.434 --rc geninfo_unexecuted_blocks=1 00:06:34.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.434 ' 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.434 --rc genhtml_branch_coverage=1 00:06:34.434 --rc genhtml_function_coverage=1 00:06:34.434 --rc genhtml_legend=1 00:06:34.434 --rc geninfo_all_blocks=1 00:06:34.434 --rc geninfo_unexecuted_blocks=1 00:06:34.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.434 ' 00:06:34.434 11:06:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:34.434 11:06:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:34.434 11:06:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.434 11:06:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.434 
11:06:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:34.434 ************************************ 00:06:34.434 START TEST acl 00:06:34.434 ************************************ 00:06:34.434 11:06:14 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:06:34.434 * Looking for test storage... 00:06:34.434 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:34.434 11:06:15 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.434 11:06:15 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.434 11:06:15 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.693 11:06:15 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.693 11:06:15 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:06:34.693 11:06:15 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.693 11:06:15 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.693 --rc genhtml_branch_coverage=1 00:06:34.693 --rc genhtml_function_coverage=1 00:06:34.693 --rc genhtml_legend=1 00:06:34.693 --rc geninfo_all_blocks=1 00:06:34.693 --rc geninfo_unexecuted_blocks=1 00:06:34.693 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.693 ' 00:06:34.693 11:06:15 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.693 --rc genhtml_branch_coverage=1 00:06:34.693 --rc genhtml_function_coverage=1 00:06:34.693 --rc genhtml_legend=1 00:06:34.693 --rc geninfo_all_blocks=1 00:06:34.693 --rc geninfo_unexecuted_blocks=1 00:06:34.693 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.693 ' 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.694 --rc genhtml_branch_coverage=1 00:06:34.694 --rc genhtml_function_coverage=1 00:06:34.694 --rc genhtml_legend=1 00:06:34.694 --rc geninfo_all_blocks=1 00:06:34.694 --rc geninfo_unexecuted_blocks=1 00:06:34.694 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.694 ' 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.694 --rc genhtml_branch_coverage=1 00:06:34.694 --rc genhtml_function_coverage=1 00:06:34.694 --rc genhtml_legend=1 00:06:34.694 --rc geninfo_all_blocks=1 00:06:34.694 --rc geninfo_unexecuted_blocks=1 00:06:34.694 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.694 ' 00:06:34.694 11:06:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:34.694 11:06:15 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:34.694 11:06:15 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:34.694 11:06:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:34.694 11:06:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:34.694 11:06:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:34.694 11:06:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:34.694 11:06:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:34.694 11:06:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:34.694 11:06:15 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:37.980 11:06:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:37.981 11:06:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:37.981 11:06:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:37.981 11:06:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:37.981 11:06:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:37.981 11:06:18 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:41.267 Hugepages 00:06:41.267 node hugesize free / total 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 00:06:41.267 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:41.267 11:06:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:41.267 11:06:21 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.267 11:06:21 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.267 11:06:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:41.267 ************************************ 00:06:41.267 START TEST denied 00:06:41.267 ************************************ 00:06:41.267 11:06:21 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:06:41.267 11:06:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:06:41.267 11:06:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:06:41.267 11:06:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:41.267 11:06:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:41.267 11:06:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:44.550 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:44.550 11:06:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:48.730 
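[annotation — not part of the build log] The "denied" test above sets PCI_BLOCKED for the NVMe controller, runs setup.sh config (which logs "Skipping denied controller at 0000:5e:00.0"), and then verifies via sysfs that the device is still bound to the nvme driver rather than rebound to vfio-pci/uio. A minimal standalone version of that probe follows, using the same sysfs layout the trace reads with readlink; the BDF value is the one from this particular run and would differ per machine.

    #!/usr/bin/env bash
    # Sketch: check which kernel driver a PCI device is bound to (simplified
    # form of the verify step traced above).
    bdf=0000:5e:00.0
    expected=nvme

    if [[ -e /sys/bus/pci/devices/$bdf ]]; then
        # The 'driver' symlink points at the bound driver's sysfs directory,
        # e.g. /sys/bus/pci/drivers/nvme; its basename names the driver.
        driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
        if [[ $driver == "$expected" ]]; then
            echo "$bdf still bound to $expected"
        else
            echo "$bdf rebound to $driver"
        fi
    fi

The companion "allowed" test later in the log inverts this: with PCI_ALLOWED set, setup.sh config is expected to rebind the controller (the "nvme -> vfio-pci" line), so the same readlink check succeeds when the driver has changed.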
00:06:48.730 real 0m6.953s 00:06:48.730 user 0m2.120s 00:06:48.730 sys 0m4.115s 00:06:48.730 11:06:28 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.730 11:06:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:48.730 ************************************ 00:06:48.730 END TEST denied 00:06:48.730 ************************************ 00:06:48.730 11:06:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:48.730 11:06:28 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.730 11:06:28 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.730 11:06:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:48.730 ************************************ 00:06:48.730 START TEST allowed 00:06:48.730 ************************************ 00:06:48.730 11:06:28 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:48.730 11:06:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:06:48.730 11:06:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:06:48.730 11:06:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:48.730 11:06:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:48.730 11:06:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:55.286 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:55.286 11:06:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:06:55.286 11:06:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:55.286 11:06:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:55.286 11:06:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:55.286 11:06:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:57.818 00:06:57.818 real 0m9.314s 00:06:57.818 user 0m2.094s 00:06:57.818 sys 0m4.011s 00:06:57.818 11:06:38 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.818 11:06:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:57.818 ************************************ 00:06:57.818 END TEST allowed 00:06:57.818 ************************************ 00:06:57.818 00:06:57.818 real 0m23.311s 00:06:57.818 user 0m6.805s 00:06:57.818 sys 0m12.844s 00:06:57.818 11:06:38 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.818 11:06:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:57.818 ************************************ 00:06:57.818 END TEST acl 00:06:57.818 ************************************ 00:06:57.818 11:06:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:06:57.818 11:06:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.818 11:06:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.818 11:06:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:57.818 ************************************ 00:06:57.818 START TEST hugepages 00:06:57.818 ************************************ 00:06:57.818 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:06:57.818 * Looking for test storage... 
00:06:57.818 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:57.818 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.818 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.818 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:58.078 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.078 11:06:38 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:06:58.078 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.078 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.078 --rc genhtml_branch_coverage=1 00:06:58.078 --rc genhtml_function_coverage=1 00:06:58.078 --rc genhtml_legend=1 00:06:58.078 --rc geninfo_all_blocks=1 00:06:58.078 --rc geninfo_unexecuted_blocks=1 00:06:58.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.079 ' 00:06:58.079 11:06:38 
setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:58.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.079 --rc genhtml_branch_coverage=1 00:06:58.079 --rc genhtml_function_coverage=1 00:06:58.079 --rc genhtml_legend=1 00:06:58.079 --rc geninfo_all_blocks=1 00:06:58.079 --rc geninfo_unexecuted_blocks=1 00:06:58.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.079 ' 00:06:58.079 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:58.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.079 --rc genhtml_branch_coverage=1 00:06:58.079 --rc genhtml_function_coverage=1 00:06:58.079 --rc genhtml_legend=1 00:06:58.079 --rc geninfo_all_blocks=1 00:06:58.079 --rc geninfo_unexecuted_blocks=1 00:06:58.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.079 ' 00:06:58.079 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:58.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.079 --rc genhtml_branch_coverage=1 00:06:58.079 --rc genhtml_function_coverage=1 00:06:58.079 --rc genhtml_legend=1 00:06:58.079 --rc geninfo_all_blocks=1 00:06:58.079 --rc geninfo_unexecuted_blocks=1 00:06:58.079 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.079 ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 74088036 kB' 'MemAvailable: 77725748 kB' 'Buffers: 9752 kB' 'Cached: 11688412 kB' 'SwapCached: 0 kB' 'Active: 8649552 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164508 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663932 kB' 'Mapped: 161760 kB' 
'Shmem: 7503864 kB' 'KReclaimable: 194008 kB' 'Slab: 656004 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 461996 kB' 'KernelStack: 16240 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434212 kB' 'Committed_AS: 9411736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- 
00:06:58.079 11:06:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue for every non-matching /proc/meminfo field from Dirty through HugePages_Surp]
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:58.080 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
[xtrace condensed: the @39 for-loop / @40 echo 0 pair repeats for the second hugepage size of node 0 and for both hugepage sizes of node 1]
00:06:58.081 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:06:58.081 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
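For reference, the lookup the xtrace above is stepping through is a plain field scan: get_meminfo in setup/common.sh reads the meminfo lines one at a time with read under IFS=': ', skips every field whose name does not match, and echoes the value of the one that does (2048 for Hugepagesize here). A minimal self-contained sketch of that pattern; the helper name and error handling are illustrative, not the verbatim SPDK source:

#!/usr/bin/env bash
# Sketch of the /proc/meminfo field lookup exercised by the xtrace above.
# get_meminfo_field is an illustrative name, not the SPDK helper itself.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip non-matching fields
        echo "$val"                       # value only, e.g. 2048 for Hugepagesize
        return 0
    done < /proc/meminfo
    return 1                              # field not present
}

get_meminfo_field Hugepagesize            # prints 2048 on this node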
00:06:58.081 11:06:38 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:06:58.081 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:58.081 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:58.081 11:06:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:58.081 ************************************
00:06:58.081 START TEST single_node_setup
00:06:58.081 ************************************
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
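The arithmetic behind nr_hugepages=1024 in the trace above: the test asks get_test_nr_hugepages for 2097152 kB (2 GiB), and the pool is built from the 2048 kB default hugepages found earlier, so the page count is the integer quotient. A one-liner restating it, mirroring the traced variable names (illustrative, not the SPDK source):

# 2 GiB requested / 2 MiB per huge page = 1024 huge pages
default_hugepages=2048   # kB, the Hugepagesize read from /proc/meminfo
size=2097152             # kB, first argument of get_test_nr_hugepages
echo $(( size / default_hugepages ))   # -> 1024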
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:06:58.081 11:06:38 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:07:01.368 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:07:01.368 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:07:04.661 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
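The rebind lines above come from scripts/setup.sh detaching each ioatdma channel and the 0000:5e:00.0 NVMe controller from their kernel drivers and handing them to vfio-pci for userspace I/O. The log does not show the exact commands setup.sh runs; the generic sysfs mechanism for such a rebind looks roughly like the following illustrative sketch (run as root, BDF taken from the log):

# Generic sysfs rebind of one PCI function to vfio-pci (not setup.sh verbatim).
bdf=0000:5e:00.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach from nvme
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next driver
echo "$bdf" > /sys/bus/pci/drivers_probe                     # re-probe the device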
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:07:04.661 11:06:44 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76249964 kB' 'MemAvailable: 79887660 kB' 'Buffers: 9752 kB' 'Cached: 11688540 kB' 'SwapCached: 0 kB' 'Active: 8649636 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164592 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663376 kB' 'Mapped: 161200 kB' 'Shmem: 7503992 kB' 'KReclaimable: 193976 kB' 'Slab: 655124 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461148 kB' 'KernelStack: 16416 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9416048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199120 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace condensed: setup/common.sh@31-32 compares every field from MemTotal through HardwareCorrupted against AnonHugePages and skips it with continue]
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
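One detail worth calling out in the get_meminfo trace: the same parser serves both /proc/meminfo and the per-node /sys/devices/system/node/nodeN/meminfo files, whose lines carry a leading "Node N " prefix. The extglob expansion at setup/common.sh@29 strips that prefix so both formats parse identically; with no node argument (node= is empty above) the strip is a no-op. A small standalone demo, with made-up sample values:

#!/usr/bin/env bash
# Demo of the "Node N " prefix strip performed at setup/common.sh@29.
shopt -s extglob                      # +([0-9]) requires extglob
mem=("Node 0 MemTotal: 47234048 kB" "MemFree: 40000000 kB")
mem=("${mem[@]#Node +([0-9]) }")      # per-node lines lose their prefix
printf '%s\n' "${mem[@]}"             # both lines now start with the field name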
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:07:04.663 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76251416 kB' 'MemAvailable: 79889112 kB' 'Buffers: 9752 kB' 'Cached: 11688540 kB' 'SwapCached: 0 kB' 'Active: 8648728 kB' 'Inactive: 3709256 kB' 'Active(anon): 8163684 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662856 kB' 'Mapped: 161088 kB' 'Shmem: 7503992 kB' 'KReclaimable: 193976 kB' 'Slab: 654976 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461000 kB' 'KernelStack: 16240 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9414692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace condensed: setup/common.sh@31-32 compares every field from MemTotal through HugePages_Rsvd against HugePages_Surp and skips it with continue]
00:07:04.664 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.664 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:07:04.664 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:07:04.664 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
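At this point verify_nr_hugepages has anon=0 and surp=0 in hand and goes back for HugePages_Rsvd; all three meminfo dumps agree that HugePages_Total and HugePages_Free are 1024 with nothing reserved or surplus, i.e. the full 1024-page (2 GiB) pool was allocated and sits idle. A sketch of an equivalent standalone sanity check, reusing the get_meminfo_field helper sketched earlier; the function and the exact conditions are illustrative assumptions, not SPDK's verify logic:

# Illustrative pool check in the spirit of verify_nr_hugepages (not SPDK source).
check_hugepage_pool() {
    local total free rsvd surp
    total=$(get_meminfo_field HugePages_Total)
    free=$(get_meminfo_field HugePages_Free)
    rsvd=$(get_meminfo_field HugePages_Rsvd)
    surp=$(get_meminfo_field HugePages_Surp)
    (( total == 1024 && surp == 0 )) || return 1  # pool sized as requested
    (( free + rsvd <= total )) || return 1        # basic accounting sanity
}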
[xtrace trimmed: setup/common.sh@31-32 loop compares every /proc/meminfo key from the snapshot above against HugePages_Rsvd; all non-matching keys hit continue]
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:07:04.666 nr_hugepages=1024
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:07:04.666 resv_hugepages=0
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:07:04.666 surplus_hugepages=0
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:07:04.666 anon_hugepages=0
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
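[A sketch of the accounting check traced at hugepages.sh@106-108. The literal 1024 on the left-hand side was already expanded by the time xtrace printed it, so its source is an assumption here; HugePages_Free is one plausible origin. Uses the get_meminfo sketch above.]

  nr_hugepages=1024                    # configured pool size for this test
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
  free=$(get_meminfo HugePages_Free)   # 1024 in this run (assumed LHS source)
  # Pool is consistent when every configured page is accounted for and none
  # are surplus or reserved -- exactly what both assertions pass with here.
  (( free == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
  (( free == nr_hugepages )) || echo 'free pages != configured pages' >&2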
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:07:04.666 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.667 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76249328 kB' 'MemAvailable: 79887024 kB' 'Buffers: 9752 kB' 'Cached: 11688584 kB' 'SwapCached: 0 kB' 'Active: 8648504 kB' 'Inactive: 3709256 kB' 'Active(anon): 8163460 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662588 kB' 'Mapped: 161088 kB' 'Shmem: 7504036 kB' 'KReclaimable: 193976 kB' 'Slab: 654976 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461000 kB' 'KernelStack: 16272 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9416112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199104 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace trimmed: setup/common.sh@31-32 loop compares every key of the snapshot above against HugePages_Total; all non-matching keys hit continue]
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
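[A sketch of the get_nodes pattern traced at hugepages.sh@26-32. The right-hand side of each nodes_sys assignment is already expanded in the trace (1024 for node0, 0 for node1), so how the count is sourced is an assumption; a per-node get_meminfo call is used here as a stand-in.]

  shopt -s extglob                 # the node+([0-9]) glob is an extglob pattern
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} keeps only the trailing index: .../node0 -> 0, .../node1 -> 1
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  no_nodes=${#nodes_sys[@]}        # 2 on this machine: node0=1024, node1=0
  (( no_nodes > 0 )) || exit 1     # the traced check at hugepages.sh@32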
00:07:04.668 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 42294960 kB' 'MemUsed: 5819044 kB' 'SwapCached: 0 kB' 'Active: 2807632 kB' 'Inactive: 102764 kB' 'Active(anon): 2611252 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2659116 kB' 'Mapped: 64664 kB' 'AnonPages: 254328 kB' 'Shmem: 2359972 kB' 'KernelStack: 9256 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90632 kB' 'Slab: 345416 kB' 'SReclaimable: 90632 kB' 'SUnreclaim: 254784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.669 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
[xtrace condensed: identical IFS=': ' / read -r var val _ / continue iterations for every remaining /proc/meminfo key, Mapped through HugePages_Free, until HugePages_Surp matched]
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:07:04.670 node0=1024 expecting 1024
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:07:04.670
00:07:04.670 real 0m6.556s
00:07:04.670 user 0m1.303s
00:07:04.670 sys 0m2.174s
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:04.670 11:06:45 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:07:04.670 ************************************
00:07:04.670 END TEST single_node_setup
00:07:04.670 ************************************
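Editor's note: every meminfo scan condensed in this log comes from get_meminfo in setup/common.sh, which walks /proc/meminfo with IFS=': ', reads each key/value pair, and continues past every key except the one requested. A minimal sketch of that pattern, reconstructed from the xtrace above rather than copied from the SPDK source:

    # Sketch: fetch one value from /proc/meminfo by key name.
    # Mirrors the IFS=': ' / read -r var val _ / continue loop in the trace.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "${val:-0}"                   # e.g. HugePages_Surp -> 0
            return 0
        done < /proc/meminfo
        echo 0                                 # key absent: report 0
    }

In the run above this resolves HugePages_Surp to 0, so the surplus term adds nothing to nodes_test[node] and node0 stays at the expected 1024.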
00:07:04.670 11:06:45 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:07:04.670 11:06:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:04.670 11:06:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:04.670 11:06:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:04.670 ************************************
00:07:04.670 START TEST even_2G_alloc
00:07:04.670 ************************************
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
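Editor's note: the hugepages.sh@80-@83 loop just traced is where this test gets its name: get_test_nr_hugepages_per_node splits the 1024-page target evenly, 512 per NUMA node. A minimal sketch of that arithmetic (simplified variable names, not the verbatim hugepages.sh):

    # Sketch: spread a hugepage target evenly across NUMA nodes,
    # reproducing the 1024 -> 512/512 split seen in the trace.
    total_pages=1024
    node_count=2
    declare -a nodes_test
    for (( n = node_count - 1; n >= 0; n-- )); do
        nodes_test[n]=$(( total_pages / node_count ))   # 512 each
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

With NRHUGE=1024 set, scripts/setup.sh (run next in the log) reserves those pages before verify_nr_hugepages re-reads /proc/meminfo.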
00:07:04.670 11:06:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:07:07.962 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:07.962 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:07.962 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:07.962 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76273244 kB' 'MemAvailable: 79910940 kB' 'Buffers: 9752 kB' 'Cached: 11688688 kB' 'SwapCached: 0 kB' 'Active: 8649964 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164920 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664072 kB' 'Mapped: 161208 kB' 'Shmem: 7504140 kB' 'KReclaimable: 193976 kB' 'Slab: 655424 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461448 kB' 'KernelStack: 16224 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9414288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199120 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace condensed: identical IFS=': ' / read -r var val _ / continue iterations for every /proc/meminfo key, MemTotal through HardwareCorrupted, until AnonHugePages matched]
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:07.964 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76275292 kB' 'MemAvailable: 79912988 kB' 'Buffers: 9752 kB' 'Cached: 11688692 kB' 'SwapCached: 0 kB' 'Active: 8649724 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164680 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663876 kB' 'Mapped: 161176 kB' 'Shmem: 7504144 kB' 'KReclaimable: 193976 kB' 'Slab: 655428 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461452 kB' 'KernelStack: 16224 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9414304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199088 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace condensed: identical IFS=': ' / read -r var val _ / continue iterations for every /proc/meminfo key, MemTotal through HugePages_Rsvd, until HugePages_Surp matched]
00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
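Editor's note: verify_nr_hugepages has now collected two of the three counters it checks (anon=0, surp=0) and is about to read the third. The sequence amounts to the sketch below, reusing the hypothetical get_meminfo helper from the earlier note:

    # Sketch: the three meminfo reads driving verify_nr_hugepages.
    anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # the read that follows in the trace
    echo "anon=$anon surp=$surp resv=$resv"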
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76275292 kB' 'MemAvailable: 79912988 kB' 'Buffers: 9752 kB' 'Cached: 11688692 kB' 'SwapCached: 0 kB' 'Active: 8650112 kB' 'Inactive: 3709256 kB' 'Active(anon): 8165068 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664264 kB' 'Mapped: 161680 kB' 'Shmem: 7504144 kB' 'KReclaimable: 193976 kB' 'Slab: 655428 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461452 kB' 'KernelStack: 16208 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9415816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199072 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.966 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:07.966 11:06:48 
00:07:07.966 [xtrace elided: setup/common.sh@32 compares every snapshot field from MemTotal through HugePages_Free against HugePages_Rsvd; all fail and hit continue]
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:07:07.968 nr_hugepages=1024
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:07:07.968 resv_hugepages=0
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:07:07.968 surplus_hugepages=0
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:07:07.968 anon_hugepages=0
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
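The block above is the xtrace of a plain meminfo lookup: slurp /proc/meminfo (or a per-node copy), strip any "Node N " prefix, split each line on ': ', and echo the value of the first field whose name matches the request; that is how surp=0, resv=0 and nr_hugepages=1024 were produced. A minimal stand-alone sketch of the technique (get_meminfo_sketch is an illustrative name, single-digit node numbers assumed; this is not SPDK's setup/common.sh itself):

# get_meminfo_sketch FIELD [NODE] - print FIELD's value from /proc/meminfo,
# or from /sys/devices/system/node/nodeN/meminfo when NODE is given.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node [0-9] }          # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}
# Usage matching the values in this log:
#   get_meminfo_sketch HugePages_Total     # -> 1024
#   get_meminfo_sketch HugePages_Surp 0    # -> 0   (node 0)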
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:07.968 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76276240 kB' 'MemAvailable: 79913936 kB' 'Buffers: 9752 kB' 'Cached: 11688732 kB' 'SwapCached: 0 kB' 'Active: 8655352 kB' 'Inactive: 3709256 kB' 'Active(anon): 8170308 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669508 kB' 'Mapped: 162028 kB' 'Shmem: 7504184 kB' 'KReclaimable: 193976 kB' 'Slab: 655428 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461452 kB' 'KernelStack: 16224 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9420468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199076 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:07:07.969 [xtrace elided: setup/common.sh@32 compares every snapshot field from MemTotal through Unaccepted against HugePages_Total; all fail and hit continue]
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
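get_nodes found two NUMA nodes and set the expectation of 512 pages each, i.e. the 1024 two-megabyte pages should split evenly. The same check can be phrased directly against the kernel's per-node sysfs counters; a sketch under the assumptions of 2048 kB pages and the two-node layout seen in this run (the sysfs paths are standard kernel ABI, the variable names are illustrative):

# Sketch: confirm 1024 x 2 MiB hugepages split evenly across the nodes,
# mirroring the bookkeeping this test performs (not the test's own code).
expected_per_node=512
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    got=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    (( total += got ))
    (( got == expected_per_node )) || echo "node$n holds $got pages, expected $expected_per_node"
done
(( total == 1024 )) && echo "even_2G_alloc: $total pages evenly allocated"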
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:07.969 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:07.970 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:07.970 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:07.970 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 43356100 kB' 'MemUsed: 4757904 kB' 'SwapCached: 0 kB' 'Active: 2807604 kB' 'Inactive: 102764 kB' 'Active(anon): 2611224 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2659152 kB' 'Mapped: 64676 kB' 'AnonPages: 254424 kB' 'Shmem: 2360008 kB' 'KernelStack: 8968 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90632 kB' 'Slab: 345756 kB' 'SReclaimable: 90632 kB' 'SUnreclaim: 255124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:07:07.970 [xtrace elided: node0 snapshot fields from MemTotal through HugePages_Free compared against HugePages_Surp; all fail and hit continue]
00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171516 kB' 'MemFree: 32926888 kB' 'MemUsed: 11244628 kB' 'SwapCached: 0 kB' 'Active: 5842028 kB' 'Inactive: 3606492 kB' 'Active(anon): 5553364 kB' 
'Inactive(anon): 0 kB' 'Active(file): 288664 kB' 'Inactive(file): 3606492 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9039372 kB' 'Mapped: 96500 kB' 'AnonPages: 408712 kB' 'Shmem: 5144216 kB' 'KernelStack: 7256 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103344 kB' 'Slab: 309672 kB' 'SReclaimable: 103344 kB' 'SUnreclaim: 206328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.971 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:07:07.972 node0=512 expecting 512 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:07:07.972 node1=512 expecting 512 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:07:07.972 00:07:07.972 real 0m3.332s 00:07:07.972 user 0m1.315s 00:07:07.972 sys 0m2.087s 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.972 11:06:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:07.972 ************************************ 00:07:07.972 END TEST even_2G_alloc 00:07:07.972 ************************************ 00:07:08.231 11:06:48 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:07:08.231 11:06:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.231 11:06:48 
00:07:08.231 11:06:48 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:07:08.231 11:06:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:08.231 11:06:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:08.231 11:06:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:07:08.231 ************************************
00:07:08.231 START TEST odd_alloc
00:07:08.231 ************************************
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
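The arithmetic just traced (hugepages.sh@80-@83) splits an odd-sized pool of 1025 pages across both NUMA nodes, filling the highest-numbered node first. A sketch of that split, reconstructed from the xtrace rather than copied from the SPDK source (the round-up in nr_hugepages is inferred from 2098176 kB over 2048 kB pages yielding 1025, and the ': 513' / ': 1' lines above are the remainder and node countdown showing through as arithmetic no-ops):

    # Distribute nr_hugepages across NUMA nodes the way the trace shows.
    size=2098176          # requested size in kB
    default_hugepages=2048
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))  # 1025

    _no_nodes=2
    _nr_hugepages=$nr_hugepages
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        # each node gets the floor of an even split of what is left ...
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))  # 512, then 513
        # ... and the remainder stays in the pool for the nodes still to fill
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512

Which leaves node0 holding the extra page, matching the 512-then-513 assignments in the trace above.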
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:08.231 11:06:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:07:11.531 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:11.531 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:11.531 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:11.531 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76313136 kB' 'MemAvailable: 79950832 kB' 'Buffers: 9752 kB' 'Cached: 11688844 kB' 'SwapCached: 0 kB' 'Active: 8655040 kB' 'Inactive: 3709256 kB' 'Active(anon): 8169996 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 669012 kB' 'Mapped: 160724 kB' 'Shmem: 7504296 kB' 'KReclaimable: 193976 kB' 'Slab: 655148 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461172 kB' 'KernelStack: 16240 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9416084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199284 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[... xtrace condensed: setup/common.sh@31-32 walks the /proc/meminfo fields printed above, skipping each one with 'continue' until AnonHugePages matches ...]
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:11.532 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76314612 kB' 'MemAvailable: 79952308 kB' 'Buffers: 9752 kB' 'Cached: 11688844 kB' 'SwapCached: 0 kB' 'Active: 8649756 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164712 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663740 kB' 'Mapped: 160220 kB' 'Shmem: 7504296 kB' 'KReclaimable: 193976 kB' 'Slab: 655124 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461148 kB' 'KernelStack: 16304 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9408608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199120 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[... xtrace condensed: the per-field scan of the snapshot above continues, skipping each field (SwapCached, SwapTotal, ..., ShmemHugePages, ShmemPmdMapped, FileHugePages, CmaTotal, Unaccepted, HugePages_Total, HugePages_Free) with 'continue' on its way to HugePages_Surp ...]
00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76314092 kB' 'MemAvailable: 79951788 kB' 'Buffers: 9752 kB' 'Cached: 11688844 kB' 'SwapCached: 0 kB' 'Active: 8648668 kB' 'Inactive: 3709256 kB' 'Active(anon): 8163624 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662640 kB' 'Mapped: 160244 kB' 'Shmem: 7504296 kB' 'KReclaimable: 193976 kB' 'Slab: 655260 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461284 kB' 'KernelStack: 16032 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9407760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199056 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.534 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
00:07:11.534 [condensed] setup/common.sh@31-32 IFS=': ' read/compare loop: every /proc/meminfo field from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hits 'continue'
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:07:11.536 nr_hugepages=1025
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:07:11.536 resv_hugepages=0
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:07:11.536 surplus_hugepages=0
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:07:11.536 anon_hugepages=0
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
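The hugepages.sh@106 and @108 checks just above are plain arithmetic: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages, and with surp=0 and resv=0 it must equal nr_hugepages exactly. A small sketch of that consistency check using the values this run echoed (total is read back a few entries further down):

# Accounting check mirroring hugepages.sh@106/@108 above, with the
# values this run produced (nr_hugepages=1025, surp=0, resv=0).
nr_hugepages=1025
surp=0      # HugePages_Surp, read back via get_meminfo
resv=0      # HugePages_Rsvd, read back via get_meminfo
total=1025  # HugePages_Total, read back just below in the log

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "hugepage accounting mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi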
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:11.536 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76313960 kB' 'MemAvailable: 79951656 kB' 'Buffers: 9752 kB' 'Cached: 11688904 kB' 'SwapCached: 0 kB' 'Active: 8648188 kB' 'Inactive: 3709256 kB' 'Active(anon): 8163144 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662108 kB' 'Mapped: 160244 kB' 'Shmem: 7504356 kB' 'KReclaimable: 193976 kB' 'Slab: 655244 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461268 kB' 'KernelStack: 16048 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9407780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199056 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:07:11.537 [condensed] setup/common.sh@31-32 IFS=': ' read/compare loop: every /proc/meminfo field from MemTotal through Unaccepted fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hits 'continue'
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 43375884 kB' 'MemUsed: 4738120 kB' 'SwapCached: 0 kB' 'Active: 2805984 kB' 'Inactive: 102764 kB' 'Active(anon): 2609604 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2659168 kB' 'Mapped: 64208 kB' 'AnonPages: 252756 kB' 'Shmem: 2360024 kB' 'KernelStack: 8872 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90632 kB' 'Slab: 345524 kB' 'SReclaimable: 90632 kB' 'SUnreclaim: 254892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:07:11.538 [condensed] setup/common.sh@31-32 IFS=': ' read/compare loop: node0 fields MemTotal through Active(file) fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hit 'continue'
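This node0 lookup is the point of the odd_alloc case: 1025 pages cannot be split evenly across two NUMA nodes, and the snapshot above shows the kernel placed 513 on node0 ('HugePages_Total: 513'), leaving 512 for node1, matching the nodes_sys assignments in the trace. A sketch of that expected split; the round-up-on-node0 rule is inferred from the 513/512 values seen here, not taken from hugepages.sh:

# Expected per-node split of an odd hugepage count over two nodes,
# matching nodes_sys[0]=513 and nodes_sys[1]=512 in the trace above.
# The ceiling-for-the-first-node rule is inferred from those values.
nr_hugepages=1025
no_nodes=2
declare -a nodes_sys
nodes_sys[0]=$(( (nr_hugepages + no_nodes - 1) / no_nodes ))  # 513
nodes_sys[1]=$(( nr_hugepages - nodes_sys[0] ))               # 512
echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]}"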
IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.538 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171516 kB' 'MemFree: 32937824 kB' 'MemUsed: 11233692 kB' 'SwapCached: 0 kB' 'Active: 5842952 kB' 'Inactive: 3606492 kB' 'Active(anon): 5554288 kB' 'Inactive(anon): 0 kB' 'Active(file): 288664 kB' 'Inactive(file): 3606492 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9039512 kB' 'Mapped: 96036 kB' 'AnonPages: 410116 kB' 'Shmem: 5144356 kB' 'KernelStack: 7208 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103344 kB' 'Slab: 309720 kB' 'SReclaimable: 103344 kB' 'SUnreclaim: 206376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.539 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
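The records above show setup/common.sh's get_meminfo at work: when a node number is given it switches from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix from every record, then scans key/value pairs until the requested field matches and echoes its value. A minimal sketch of the same technique, assuming only standard bash and sed (the function name and the sed-based prefix strip are illustrative, not the verbatim SPDK source):

  # get_meminfo_sketch <field> [node] - echo one field from (per-node) meminfo.
  get_meminfo_sketch() {
      local get=$1 node=$2 mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every record with "Node <N> "; drop it, then
      # split on ': ' exactly like the IFS=': ' / read -r var val _ loop above.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  get_meminfo_sketch HugePages_Surp 1   # -> 0, matching the scan traced here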
[xtrace condensed: the same per-field scan over the node1 meminfo output, each field skipped with continue until HugePages_Surp matched]
00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
in "${!nodes_test[@]}" 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:07:11.540 node0=513 expecting 513 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:07:11.540 node1=512 expecting 512 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:07:11.540 00:07:11.540 real 0m3.215s 00:07:11.540 user 0m1.214s 00:07:11.540 sys 0m2.073s 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.540 11:06:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:11.540 ************************************ 00:07:11.540 END TEST odd_alloc 00:07:11.540 ************************************ 00:07:11.540 11:06:51 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:07:11.540 11:06:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.540 11:06:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.540 11:06:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:11.541 ************************************ 00:07:11.541 START TEST custom_alloc 00:07:11.541 ************************************ 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:07:11.541 11:06:51 
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 ))
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
"${!nodes_hp[@]}" 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:07:14.330 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:14.330 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:14.330 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:11.541 11:06:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:07:14.330 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:14.330 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:14.330 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:14.596 11:06:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75300288 kB' 'MemAvailable: 78937984 kB' 'Buffers: 9752 kB' 'Cached: 11688996 kB' 'SwapCached: 0 kB' 'Active: 8649388 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164344 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663152 kB' 'Mapped: 160284 kB' 'Shmem: 7504448 kB' 'KReclaimable: 193976 kB' 'Slab: 655504 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461528 kB' 'KernelStack: 16080 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9408268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199056 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace condensed: setup/common.sh@31-32 compared each /proc/meminfo field against AnonHugePages and skipped it with continue until the match below]
00:07:14.597 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:14.597 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.597 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:14.597 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
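verify_nr_hugepages begins with the transparent-hugepage probe seen at hugepages.sh@95-96: because /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never" rather than a global "[never]", it samples AnonHugePages, which is 0 kB on this machine, hence anon=0. A hedged sketch of that probe (the sysfs and /proc paths are the standard kernel interfaces; the variable names are illustrative):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # AnonHugePages is reported in kB in /proc/meminfo
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon=${anon}"   # -> anon=0 for the run traced above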
00:07:14.597 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.598 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.598 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:14.598 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:14.598 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75301904 kB' 'MemAvailable: 78939600 kB' 'Buffers: 9752 kB' 'Cached: 11689000 kB' 'SwapCached: 0 kB' 'Active: 8649112 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164068 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662892 kB' 'Mapped: 160256 kB' 'Shmem: 7504452 kB' 'KReclaimable: 193976 kB' 'Slab: 655568 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461592 kB' 'KernelStack: 16064 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9408284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199040 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace elided: the HugePages_Surp scan walks MemTotal through Mapped without a match]
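The mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 strips the "Node N " prefix that per-node meminfo files carry, so the same splitter handles both file formats. A small standalone demo of that expansion (illustrative values; extglob required for the +([0-9]) pattern):

shopt -s extglob   # "+([0-9])" below is an extended glob

# /sys/devices/system/node/nodeN/meminfo lines carry a "Node N " prefix:
mem=('Node 0 MemTotal: 92285520 kB' 'Node 0 HugePages_Total: 1536')
mem=("${mem[@]#Node +([0-9]) }")   # drop the prefix from every element
printf '%s\n' "${mem[@]}"
# MemTotal: 92285520 kB
# HugePages_Total: 1536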
[xtrace elided: the HugePages_Surp scan continues through Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted; still no match]
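For a one-off lookup outside this harness, the same value can be pulled with a single awk pass; the harness deliberately stays in pure bash, so these lines are hedged equivalents for comparison only (the node0 path is an assumption about the machine):

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
# Per-node files shift the fields right by the "Node N " prefix:
awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo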
[xtrace elided: the scan skips HugePages_Total, HugePages_Free and HugePages_Rsvd before the key matches]
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:14.599 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75302284 kB' 'MemAvailable: 78939980 kB' 'Buffers: 9752 kB' 'Cached: 11689016 kB' 'SwapCached: 0 kB' 'Active: 8649148 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164104 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662896 kB' 'Mapped: 160256 kB' 'Shmem: 7504468 kB' 'KReclaimable: 193976 kB' 'Slab: 655568 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461592 kB' 'KernelStack: 16064 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9408304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199040 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace elided: the HugePages_Rsvd scan walks MemTotal through ShmemHugePages; no match yet]
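Once the reserved-page lookup returns (see the resv=0 and nr_hugepages=1536 records just below), setup/hugepages.sh@106-108 checks that the pool the test configured matches what the kernel reports. A sketch of that arithmetic, reconstructed from the trace with the values in the snapshots above (not the verbatim hugepages.sh source):

nr_hugepages=1536   # pool size the test configured
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo
total=1536          # HugePages_Total reported by the kernel

# hugepages.sh@106/@108 as seen in the trace: both conditions must hold.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1
# Sanity: 1536 pages x 2048 kB (Hugepagesize) = 3145728 kB,
# matching the 'Hugetlb: 3145728 kB' line in every snapshot.
echo "hugepage pool consistent: $(( total * 2048 )) kB"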
[xtrace elided: the HugePages_Rsvd scan continues through ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free before the key matches]
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
anon_hugepages=0
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:14.601 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75302628 kB' 'MemAvailable: 78940324 kB' 'Buffers: 9752 kB' 'Cached: 11689040 kB' 'SwapCached: 0 kB' 'Active: 8649180 kB' 'Inactive: 3709256 kB' 'Active(anon): 8164136 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662900 kB' 'Mapped: 160256 kB' 'Shmem: 7504492 kB' 'KReclaimable: 193976 kB' 'Slab: 655568 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461592 kB' 'KernelStack: 16064 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9408328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199056 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[xtrace elided: the HugePages_Total scan walks this snapshot from MemTotal through ShmemPmdMapped; the trace continues below]
00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 43398448 kB' 'MemUsed: 4715556 kB' 'SwapCached: 0 kB' 'Active: 2805676 kB' 'Inactive: 102764 kB' 'Active(anon): 2609296 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2659168 kB' 'Mapped: 64220 kB' 'AnonPages: 252348 kB' 'Shmem: 2360024 kB' 'KernelStack: 8872 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90632 kB' 'Slab: 345624 kB' 'SReclaimable: 90632 kB' 'SUnreclaim: 254992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.603 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
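For readers following the trace: the get_meminfo helper in setup/common.sh is doing nothing more than loading a meminfo file, stripping any per-node "Node N " prefix, and scanning "key: value" pairs until the requested key matches. A minimal standalone sketch of that loop, written to mirror the trace above (an illustrative approximation, not the SPDK source verbatim; the function name here is hypothetical):

shopt -s extglob   # the +([0-9]) pattern below is an extglob pattern

get_meminfo_sketch() {   # hypothetical name; the real helper is get_meminfo
    local get=$1 node=${2:-} var val _ mem
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs when a node id is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    while IFS=': ' read -r var val _; do  # e.g. var=HugePages_Surp, val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Matching the run above: get_meminfo_sketch HugePages_Total   -> 1536
#                         get_meminfo_sketch HugePages_Surp 0  -> 0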
[xtrace condensed: the same key-matching cycle runs over the node0 meminfo dump above — SwapCached through HugePages_Free — continuing past every key until HugePages_Surp matches]
00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:07:14.604 11:06:55
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171516 kB' 'MemFree: 31904792 kB' 'MemUsed: 12266724 kB' 'SwapCached: 0 kB' 'Active: 5843516 kB' 'Inactive: 3606492 kB' 'Active(anon): 5554852 kB' 'Inactive(anon): 0 kB' 'Active(file): 288664 kB' 'Inactive(file): 3606492 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9039644 kB' 'Mapped: 96036 kB' 'AnonPages: 410560 kB' 'Shmem: 5144488 kB' 'KernelStack: 7192 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103344 kB' 'Slab: 309944 kB' 'SReclaimable: 103344 kB' 'SUnreclaim: 206600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.604 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.605 11:06:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: the key-matching cycle repeats over the node1 meminfo dump above — Active through HugePages_Free — continuing past every key until HugePages_Surp matches]
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo
'node0=512 expecting 512' 00:07:14.606 node0=512 expecting 512 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:07:14.606 node1=1024 expecting 1024 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:07:14.606 00:07:14.606 real 0m3.227s 00:07:14.606 user 0m1.289s 00:07:14.606 sys 0m2.019s 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.606 11:06:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:14.606 ************************************ 00:07:14.606 END TEST custom_alloc 00:07:14.606 ************************************ 00:07:14.606 11:06:55 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:14.606 11:06:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.606 11:06:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.606 11:06:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:14.864 ************************************ 00:07:14.864 START TEST no_shrink_alloc 00:07:14.864 ************************************ 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for 
_no_nodes in "${user_nodes[@]}" 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:14.864 11:06:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:07:17.488 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:17.488 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:17.488 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:17.488 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.489 11:06:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76348640 kB' 'MemAvailable: 79986336 kB' 'Buffers: 9752 kB' 'Cached: 11689140 kB' 'SwapCached: 0 kB' 'Active: 8650208 kB' 'Inactive: 3709256 kB' 'Active(anon): 8165164 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663772 kB' 'Mapped: 160244 kB' 'Shmem: 7504592 kB' 'KReclaimable: 193976 kB' 'Slab: 655952 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461976 kB' 'KernelStack: 15984 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9408300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199136 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
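The numbers in this snapshot line up with what no_shrink_alloc requested: get_test_nr_hugepages was called with size 2097152 for node 0, and against the 2048 kB Hugepagesize reported above that is the 1024 pages shown in HugePages_Total, i.e. 2 GiB of hugetlb memory (Hugetlb: 2097152 kB). A quick sketch of that arithmetic, with illustrative variable names:

# Derive the hugepage count the way the get_test_nr_hugepages trace above does.
size_kb=2097152                                            # requested allocation, kB
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
echo $(( size_kb / hp_kb ))                                # -> 1024 == HugePages_Total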
00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the key-matching cycle walks the /proc/meminfo dump above — Cached through PageTables — continuing past every key while scanning for AnonHugePages]
00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76350680 kB' 'MemAvailable: 79988376 kB' 'Buffers: 9752 kB' 'Cached: 11689152 kB' 'SwapCached: 0 kB' 'Active: 8650236 kB' 'Inactive: 3709256 kB' 'Active(anon): 8165192 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663884 kB' 'Mapped: 160268 kB' 'Shmem: 7504604 kB' 'KReclaimable: 193976 kB' 'Slab: 656004 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 462028 kB' 'KernelStack: 16064 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9408692 kB' 'VmallocTotal: 34359738367 kB' 
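The snapshot just printed is the full meminfo view that each get_meminfo pass walks: setup/common.sh reads the file into an array (mapfile -t mem) and scans it one 'key: value' line at a time, which is what produces the long compare-and-continue runs in this trace. A minimal sketch of that scan, reconstructed from the trace alone (the real helper iterates the mapfile'd array via the @16 printf; streaming the file directly and the exact value handling are simplifying assumptions):

    # Sketch only: scan a meminfo-style file for one key and echo its value.
    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up; optional NUMA node
        local var val _
        local mem_f=/proc/meminfo
        # The node/meminfo path probed at common.sh@23 suggests a per-node
        # file is used when a node is given (assumption from the trace).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the compare-and-continue above
            echo "$val"                        # 'AnonHugePages:   0 kB' -> 0
            return 0
        done <"$mem_f"
        return 1
    }

On the snapshot above, get_meminfo AnonHugePages prints 0, which matches the anon=0 assignment recorded just before the HugePages_Surp pass begins.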
00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.489 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: the same @31-32 quartet repeats for every key from MemFree through HugePages_Rsvd; none matches HugePages_Surp ...]
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76351124 kB' 'MemAvailable: 79988820 kB' 'Buffers: 9752 kB' 'Cached: 11689168 kB' 'SwapCached: 0 kB' 'Active: 8650248 kB' 'Inactive: 3709256 kB' 'Active(anon): 8165204 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663904 kB' 'Mapped: 160268 kB' 'Shmem: 7504620 kB' 'KReclaimable: 193976 kB' 'Slab: 656004 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 462028 kB' 'KernelStack: 16064 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9408712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199088 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
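One line in each pass setup deserves a note: mem=("${mem[@]#Node +([0-9]) }") strips a leading 'Node <n> ' from every array element, so the system-wide /proc/meminfo and the per-node /sys/devices/system/node/nodeN/meminfo formats parse identically afterwards. A standalone demonstration of that expansion (sample lines modeled on the snapshot above; the +([0-9]) pattern requires extglob):

    shopt -s extglob   # enables the +([0-9]) extended glob used below
    mem=('Node 0 MemTotal: 92285520 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")   # drop the shortest 'Node <n> ' prefix
    printf '%s\n' "${mem[@]}"
    # MemTotal: 92285520 kB
    # HugePages_Surp: 0

Plain /proc/meminfo lines pass through untouched, since the pattern simply fails to match and the expansion removes nothing; that is why the same scan loop serves both the node= (empty) case traced here and a per-node query.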
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:17.490 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: the @31-32 quartet repeats for every key from MemFree through HugePages_Free; none matches HugePages_Rsvd ...]
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:07:17.491 nr_hugepages=1024
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:07:17.491 resv_hugepages=0
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:07:17.491 surplus_hugepages=0
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:07:17.491 anon_hugepages=0
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76351124 kB' 'MemAvailable: 79988820 kB' 'Buffers: 9752 kB' 'Cached: 11689192 kB' 'SwapCached: 0 kB' 'Active: 8650272 kB' 'Inactive: 3709256 kB' 'Active(anon): 8165228 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663904 kB' 'Mapped: 160268 kB' 'Shmem: 7504644 kB' 'KReclaimable: 193976 kB' 'Slab: 656004 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 462028 kB' 'KernelStack: 16064 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9408736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199088 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.491 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:17.492 11:06:57 
[xtrace condensed: the setup/common.sh@31-32 read/compare loop steps through every /proc/meminfo field from MemTotal through Unaccepted, hitting "continue" on each, until the HugePages_Total line is reached]
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
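[aside] Both lookups just traced (HugePages_Rsvd returning 0, HugePages_Total returning 1024) go through the same get_meminfo helper in setup/common.sh. For readers of this log, here is a minimal bash sketch of that helper, reconstructed from the xtrace lines alone; details such as the fallback order are assumptions, not the verbatim repository source:

    #!/usr/bin/env bash
    # Sketch of get_meminfo: print the value of one field from /proc/meminfo,
    # or from a node's own meminfo when a node id is passed.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem line var val _
        # Use the per-node file when a node id was given and the sysfs file exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "HugePages_Total:     1024" into var=HugePages_Total, val=1024
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on this host, per the trace
    get_meminfo HugePages_Rsvd    # prints 0

This scan-until-match structure is exactly why the trace above shows one read/compare/continue triple per meminfo field before each value is echoed.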
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:17.492 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 42349824 kB' 'MemUsed: 5764180 kB' 'SwapCached: 0 kB' 'Active: 2805560 kB' 'Inactive: 102764 kB' 'Active(anon): 2609180 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2659168 kB' 'Mapped: 64232 kB' 'AnonPages: 252252 kB' 'Shmem: 2360024 kB' 'KernelStack: 8888 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90632 kB' 'Slab: 345860 kB' 'SReclaimable: 90632 kB' 'SUnreclaim: 255228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the setup/common.sh@31-32 read/compare loop steps through every node0 meminfo field from MemTotal through HugePages_Free, hitting "continue" on each, until the HugePages_Surp line is reached]
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:17.493 11:06:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:07:20.016 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:20.016 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:20.016 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:20.277 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:20.277 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:20.277 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:20.277 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:20.277 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:20.277 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:20.277 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76377904 kB' 'MemAvailable: 80015600 kB' 'Buffers: 9752 kB' 'Cached: 11689280 kB' 'SwapCached: 0 kB' 'Active: 8651480 kB' 'Inactive: 3709256 kB' 'Active(anon): 8166436 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665100 kB' 'Mapped: 160296 kB' 'Shmem: 7504732 kB' 'KReclaimable: 193976 kB' 'Slab: 655888 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461912 kB' 'KernelStack: 15952 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9411676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199120 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
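[aside] The "node0=1024 expecting 1024" verdict printed above comes from per-node accounting: get_nodes records what each NUMA node actually holds, and the test folds reserved and per-node surplus pages into its expectation before comparing. A hedged sketch of that accounting, reusing the get_meminfo helper sketched earlier; the array names follow the trace, but how nodes_sys is populated is this sketch's assumption (the sysfs nr_hugepages counter shown here is illustrative), and nodes_test is seeded by the caller with the expected split:

    #!/usr/bin/env bash
    shopt -s extglob

    nodes_sys=()    # what each node actually holds, per sysfs
    nodes_test=()   # what the test expects per node, seeded by the caller

    # Record the per-node 2048 kB hugepage counts the kernel reports. The
    # trace only shows the resulting assignments (node0=1024, node1=0);
    # reading sysfs nr_hugepages is an assumed source for those values.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        local no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))
    }

    # Fold reserved and per-node surplus pages into the expectation, then report
    check_nodes() {
        local node resv=0 surp
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            surp=$(get_meminfo HugePages_Surp "$node")   # 0 for node0 above
            (( nodes_test[node] += surp ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        done
    }

    nodes_test[0]=1024   # this run expects the whole 1024-page pool on node0
    get_nodes && check_nodes

Note how this explains the INFO line above: NRHUGE=512 was requested, but node0 already held 1024 pages, so setup.sh leaves the larger allocation in place and verify_nr_hugepages (whose trace continues below) checks against 1024.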
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.277 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:20.278 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.278 11:07:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys Inactive(anon) through HardwareCorrupted each read and compared against AnonHugePages; no match, continue]
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
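At this point one full get_meminfo call (AnonHugePages) has completed. For readability, a minimal sketch of what setup/common.sh's get_meminfo appears to do, reconstructed only from the script@line references in the xtrace above; the real source may differ in detail, and the `+([0-9])` prefix strip requires extglob:

    # Sketch inferred from the trace; @NN comments map to the setup/common.sh
    # line numbers shown in the log. Details not visible in the trace
    # (redirections, exact quoting) are assumptions.
    shopt -s extglob
    get_meminfo() {
        local get=$1                # @17: meminfo key to look up
        local node=${2:-}           # @18: optional NUMA node (empty in this run)
        local var val               # @19
        local mem_f mem             # @20

        mem_f=/proc/meminfo         # @22: default source
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # @23: per-node file
        fi

        mapfile -t mem < "$mem_f"            # @28: slurp the chosen file
        mem=("${mem[@]#Node +([0-9]) }")     # @29: drop any "Node N " prefix

        # @31-@33: scan "key: value" pairs; print the value of the first match
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")  # @16: feeds the loop
        return 1
    }

The per-key [[ ... ]] / continue lines that dominate this log are that while loop being traced once per meminfo field.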
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76376404 kB' 'MemAvailable: 80014100 kB' 'Buffers: 9752 kB' 'Cached: 11689284 kB' 'SwapCached: 0 kB' 'Active: 8651348 kB' 'Inactive: 3709256 kB' 'Active(anon): 8166304 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664904 kB' 'Mapped: 160284 kB' 'Shmem: 7504736 kB' 'KReclaimable: 193976 kB' 'Slab: 655852 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461876 kB' 'KernelStack: 16080 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9411696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199104 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:07:20.279 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys MemTotal through HugePages_Rsvd each compared against HugePages_Surp; no match, continue]
00:07:20.280 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:20.280 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.280 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:20.280 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
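Before the next lookup, a quick consistency check on the snapshot printf'ed above; every number below is copied from that @16 line, nothing is new data:

    # 1024 hugepages requested, all allocated and all still free:
    #   HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0
    # Pool size agrees with the Hugetlb line: 1024 pages x 2048 kB each
    echo $((1024 * 2048))   # -> 2097152, matching 'Hugetlb: 2097152 kB'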
00:07:20.280 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76375704 kB' 'MemAvailable: 80013400 kB' 'Buffers: 9752 kB' 'Cached: 11689284 kB' 'SwapCached: 0 kB' 'Active: 8651416 kB' 'Inactive: 3709256 kB' 'Active(anon): 8166372 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664960 kB' 'Mapped: 160284 kB' 'Shmem: 7504736 kB' 'KReclaimable: 193976 kB' 'Slab: 655904 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461928 kB' 'KernelStack: 16160 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9411716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199152 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:07:20.281 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys MemTotal through HugePages_Free each compared against HugePages_Rsvd; no match, continue]
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:07:20.282 nr_hugepages=1024
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:07:20.282 resv_hugepages=0
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:07:20.282 surplus_hugepages=0
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:07:20.282 anon_hugepages=0
00:07:20.282 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
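Pulling the hugepages.sh@96-@109 events out of the trace, the verification flow reads roughly as the sketch below; the variable names and traced commands come straight from the xtrace, while the enclosing function and the origin of the pre-expanded literal 1024 are inferences, not repo source:

    anon=$(get_meminfo AnonHugePages)    # @96, returned 0
    surp=$(get_meminfo HugePages_Surp)   # @98, returned 0
    resv=$(get_meminfo HugePages_Rsvd)   # @99, returned 0
    echo "nr_hugepages=1024"             # @101
    echo "resv_hugepages=$resv"          # @102
    echo "surplus_hugepages=$surp"       # @103
    echo "anon_hugepages=$anon"          # @104
    (( 1024 == nr_hugepages + surp + resv ))   # @106: requested count adds up
    (( 1024 == nr_hugepages ))                 # @108: the pool was not shrunk
    get_meminfo HugePages_Total                # @109: re-read for the next check

All three lookups returning 0 alongside nr_hugepages=1024 is the expected outcome for a no_shrink_alloc pass: no anonymous, surplus, or reserved hugepages are skewing the 1024-page pool.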
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76375724 kB' 'MemAvailable: 80013420 kB' 'Buffers: 9752 kB' 'Cached: 11689284 kB' 'SwapCached: 0 kB' 'Active: 8651564 kB' 'Inactive: 3709256 kB' 'Active(anon): 8166520 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665108 kB' 'Mapped: 160284 kB' 'Shmem: 7504736 kB' 'KReclaimable: 193976 kB' 'Slab: 655904 kB' 'SReclaimable: 193976 kB' 'SUnreclaim: 461928 kB' 'KernelStack: 16256 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9411740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199200 kB' 'VmallocChunk: 0 kB' 'Percpu: 47808 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:20.283 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys MemTotal through AnonPages each compared against HugePages_Total; no match, continue]
00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:20.284 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:20.543 11:07:00 
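The get_meminfo trace above shows how setup/common.sh extracts one field from a meminfo file: mapfile loads the file into an array, the "Node N " prefix is stripped so per-node files parse exactly like /proc/meminfo, and an IFS=': ' read loop skips every field until the requested key matches. A minimal standalone sketch of that technique follows; the function name is ours, the parsing steps mirror the trace:

#!/usr/bin/env bash
# Sketch of the setup/common.sh get_meminfo loop traced above.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # A node argument switches to the per-node file, whose lines carry a
    # "Node N " prefix that the extglob strip below removes.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"   # e.g. var=MemTotal val=92285520 _=kB
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
# get_meminfo_sketch HugePages_Total   -> 1024 on this node
# get_meminfo_sketch HugePages_Surp 0  -> 0, read from node0's meminfo

With the value in hand, hugepages.sh@109 above asserts that it equals nr_hugepages + surp + resv (1024 here), then get_nodes spreads the expectation across both NUMA nodes before each node's HugePages_Surp is re-read the same way.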
00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 42355220 kB' 'MemUsed: 5758784 kB' 'SwapCached: 0 kB' 'Active: 2806092 kB' 'Inactive: 102764 kB' 'Active(anon): 2609712 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2659168 kB' 'Mapped: 64244 kB' 'AnonPages: 252808 kB' 'Shmem: 2360024 kB' 'KernelStack: 9048 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 90632 kB' 'Slab: 345688 kB' 'SReclaimable: 90632 kB' 'SUnreclaim: 255056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:20.543 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[the identical setup/common.sh@31-32 skip trace repeats for every remaining node0 field, 'MemFree' through 'HugePages_Free']
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:07:20.544 node0=1024 expecting 1024 11:07:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:07:20.544
00:07:20.544 real 0m5.690s
00:07:20.544 user 0m2.084s
00:07:20.544 sys 0m3.665s
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:20.544 11:07:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:07:20.544 ************************************
00:07:20.544 END TEST no_shrink_alloc
************************************ 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:20.544 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:07:20.545 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:07:20.545 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:07:20.545 00:07:20.545 real 0m22.682s 00:07:20.545 user 0m7.513s 00:07:20.545 sys 0m12.422s 00:07:20.545 11:07:00 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.545 11:07:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:20.545 ************************************ 00:07:20.545 END TEST hugepages 00:07:20.545 ************************************ 00:07:20.545 11:07:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:07:20.545 11:07:01 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.545 11:07:01 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.545 11:07:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:20.545 ************************************ 00:07:20.545 START TEST driver 00:07:20.545 ************************************ 00:07:20.545 11:07:01 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:07:20.545 * Looking for test storage... 
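clear_hp, traced just above, walks every NUMA node and writes 0 into each hugepage size's nr_hugepages knob so a finished suite leaves no pages pinned, then exports CLEAR_HUGE=yes so later stages know the pools were drained. A hedged sketch of that cleanup; the sysfs layout is the kernel's standard one, the function name is ours:

# Zero every per-node hugepage pool, as clear_hp does in the trace above.
clear_hugepages_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # nr_hugepages is writable per node and per page size (needs root)
            echo 0 >"$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}

On this two-node box that is four writes in total: two page sizes (2048kB and 1048576kB are typical) on each of node0 and node1, matching the four echo 0 lines in the trace.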
00:07:20.545 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:07:20.545 11:07:01 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:20.545 11:07:01 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:07:20.545 11:07:01 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:20.803 11:07:01 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.803 11:07:01 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:07:20.803 11:07:01 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.803 11:07:01 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:20.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.803 --rc genhtml_branch_coverage=1 00:07:20.803 --rc genhtml_function_coverage=1 00:07:20.803 --rc genhtml_legend=1 00:07:20.803 --rc geninfo_all_blocks=1 00:07:20.803 --rc geninfo_unexecuted_blocks=1 00:07:20.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:20.803 ' 00:07:20.803 11:07:01 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:20.803 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:20.803 --rc genhtml_branch_coverage=1 00:07:20.803 --rc genhtml_function_coverage=1 00:07:20.803 --rc genhtml_legend=1 00:07:20.803 --rc geninfo_all_blocks=1 00:07:20.803 --rc geninfo_unexecuted_blocks=1 00:07:20.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:20.803 ' 00:07:20.803 11:07:01 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:20.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.803 --rc genhtml_branch_coverage=1 00:07:20.803 --rc genhtml_function_coverage=1 00:07:20.803 --rc genhtml_legend=1 00:07:20.803 --rc geninfo_all_blocks=1 00:07:20.803 --rc geninfo_unexecuted_blocks=1 00:07:20.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:20.803 ' 00:07:20.803 11:07:01 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:20.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.803 --rc genhtml_branch_coverage=1 00:07:20.803 --rc genhtml_function_coverage=1 00:07:20.803 --rc genhtml_legend=1 00:07:20.803 --rc geninfo_all_blocks=1 00:07:20.803 --rc geninfo_unexecuted_blocks=1 00:07:20.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:20.803 ' 00:07:20.803 11:07:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:20.803 11:07:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:20.803 11:07:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:26.063 11:07:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:26.063 11:07:05 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.063 11:07:05 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.063 11:07:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:26.063 ************************************ 00:07:26.063 START TEST guess_driver 00:07:26.063 ************************************ 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 )) 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
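Earlier in this trace, scripts/common.sh gates the LCOV_OPTS exports on a version comparison: cmp_versions splits each version string on any of '.', '-' and ':' and compares the pieces numerically, and lt 1.15 2 succeeding is what keeps the gcov-tool options enabled for the old lcov on this box. A simplified re-derivation of that gate, not the SPDK source and with non-numeric fields ignored:

# Sketch of the cmp_versions / lt logic traced above.
lt_sketch() { cmp_versions_sketch "$1" '<' "$2"; }
cmp_versions_sketch() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # pad the shorter version with zeros
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *'='* ]]   # all fields equal: only an equality op succeeds
}
# lt_sketch 1.15 2 && echo "old lcov"   -> prints "old lcov"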
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:26.063 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:07:26.063 Looking for driver=vfio-pci
11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:07:26.063 11:07:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:07:27.960 11:07:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:07:27.960 11:07:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:07:27.960 11:07:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[the identical setup/driver.sh@58/@61/@57 confirmation trace repeats for each remaining config line, 00:07:27.960 through 00:07:31.531]
00:07:31.531 11:07:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:07:31.531 11:07:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:07:31.531 11:07:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:07:31.531 11:07:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:07:31.531 11:07:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:07:31.531 11:07:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:07:31.531 11:07:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:07:36.801
00:07:36.801 real 0m10.685s
00:07:36.801 user 0m2.305s
00:07:36.801 sys 0m4.582s
00:07:36.801 11:07:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.801 11:07:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:07:36.801 ************************************
00:07:36.801 END TEST guess_driver
00:07:36.801 ************************************
00:07:36.801
00:07:36.801 real 0m15.379s
00:07:36.801 user 0m3.724s
00:07:36.801 sys 0m7.167s
00:07:36.801 11:07:16 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.801 11:07:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:07:36.801 ************************************
00:07:36.801 END TEST driver
00:07:36.801 ************************************
00:07:36.801 11:07:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:07:36.801 11:07:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:36.801 11:07:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.801 11:07:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:07:36.801 ************************************
00:07:36.801 START TEST devices
00:07:36.801 ************************************
00:07:36.801 11:07:16 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:07:36.801 * Looking for test storage...
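guess_driver, which just completed above, prefers vfio-pci whenever IOMMU groups are populated (160 on this node) or unsafe no-IOMMU mode is enabled, and accepts the driver only if modprobe --show-depends resolves it to real .ko objects. A condensed sketch of that decision; it simplifies the traced pick_driver/vfio/is_driver chain and omits the fallback drivers the real script also knows about:

# Pick vfio-pci when the IOMMU is usable, mirroring setup/driver.sh above.
pick_driver_sketch() {
    shopt -s nullglob   # an empty iommu_groups glob should yield zero elements
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local -a iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # is_driver in the trace: accept only if the dependency chain
        # resolves to real kernel objects.
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

The @58/@61/@57 loop that follows the pick simply re-reads setup.sh config output and confirms every device line was bound to the chosen driver, which is why fail stays 0 here.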
00:07:36.801 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:07:36.801 11:07:16 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:36.801 11:07:16 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:07:36.801 11:07:16 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:36.801 11:07:16 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.801 11:07:16 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.802 11:07:16 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:07:36.802 11:07:16 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.802 11:07:16 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.802 --rc genhtml_branch_coverage=1 00:07:36.802 --rc genhtml_function_coverage=1 00:07:36.802 --rc genhtml_legend=1 00:07:36.802 --rc geninfo_all_blocks=1 00:07:36.802 --rc geninfo_unexecuted_blocks=1 00:07:36.802 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.802 ' 00:07:36.802 11:07:16 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:07:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.802 --rc genhtml_branch_coverage=1 00:07:36.802 --rc genhtml_function_coverage=1 00:07:36.802 --rc genhtml_legend=1 00:07:36.802 --rc geninfo_all_blocks=1 00:07:36.802 --rc geninfo_unexecuted_blocks=1 00:07:36.802 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.802 ' 00:07:36.802 11:07:16 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.802 --rc genhtml_branch_coverage=1 00:07:36.802 --rc genhtml_function_coverage=1 00:07:36.802 --rc genhtml_legend=1 00:07:36.802 --rc geninfo_all_blocks=1 00:07:36.802 --rc geninfo_unexecuted_blocks=1 00:07:36.802 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.802 ' 00:07:36.802 11:07:16 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.802 --rc genhtml_branch_coverage=1 00:07:36.802 --rc genhtml_function_coverage=1 00:07:36.802 --rc genhtml_legend=1 00:07:36.802 --rc geninfo_all_blocks=1 00:07:36.802 --rc geninfo_unexecuted_blocks=1 00:07:36.802 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.802 ' 00:07:36.802 11:07:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:36.802 11:07:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:36.802 11:07:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:36.802 11:07:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:07:39.330 11:07:19 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:39.330 No valid GPT data, bailing 00:07:39.330 11:07:19 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:07:39.330 11:07:19 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:39.330 11:07:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:39.330 11:07:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.330 11:07:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 ************************************ 00:07:39.330 START TEST nvme_mount 00:07:39.330 ************************************ 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:39.330 11:07:19 
setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:39.330 11:07:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:40.265 Creating new GPT entries in memory. 00:07:40.265 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:40.265 other utilities. 00:07:40.265 11:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:40.265 11:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:40.265 11:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:40.265 11:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:40.266 11:07:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:41.199 Creating new GPT entries in memory. 00:07:41.199 The operation has completed successfully. 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3689007 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:41.199 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:41.456 11:07:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.456 11:07:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 
11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:43.984 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:44.242 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:44.242 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:44.500 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:44.500 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:44.500 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:44.500 /dev/nvme0n1: 
calling ioctl to re-read partition table: Success 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:44.500 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:47.032 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:47.291 11:07:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.575 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:50.576 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:07:50.576 00:07:50.576 real 0m11.266s 00:07:50.576 user 0m3.100s 00:07:50.576 sys 0m5.989s 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.576 11:07:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:50.576 ************************************ 00:07:50.576 END TEST nvme_mount 00:07:50.576 ************************************ 00:07:50.576 11:07:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:50.576 11:07:31 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.576 11:07:31 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.576 11:07:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:50.576 ************************************ 00:07:50.576 START TEST dm_mount 00:07:50.576 ************************************ 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:50.576 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:51.514 Creating new GPT entries in memory. 00:07:51.514 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:51.514 other utilities. 
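The sector arithmetic behind the sgdisk calls in this dm_mount setup is visible in the trace: each test partition is 1 GiB, i.e. 1073741824 / 512 = 2097152 sectors; the first partition starts at the conventional sector 2048 and each later one starts one sector past the previous end, which yields exactly the 2048:2099199 and 2099200:4196351 ranges used below. A minimal sketch of that loop, with the disk name hard-coded purely for illustration:

#!/usr/bin/env bash
# Sketch of the partition loop traced at setup/common.sh@56-60.
disk=/dev/nvme0n1                # illustrative; the test discovers this
part_no=2
size=$(( 1073741824 / 512 ))     # 1 GiB expressed in 512-byte sectors
sgdisk "$disk" --zap-all         # destroy any existing GPT/MBR first
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # part 1 -> --new=1:2048:2099199, part 2 -> --new=2:2099200:4196351
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done

The flock on the whole block device mirrors the trace; it serializes the table rewrite against other flock-aware tools (such as udev) touching the same disk.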
00:07:51.514 11:07:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:51.514 11:07:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:51.514 11:07:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:51.514 11:07:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:51.514 11:07:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:52.893 Creating new GPT entries in memory. 00:07:52.893 The operation has completed successfully. 00:07:52.893 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:52.893 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:52.893 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:52.893 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:52.893 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:07:53.829 The operation has completed successfully. 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3692728 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:53.829 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:53.830 11:07:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:57.113 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:57.114 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:57.114 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:57.114 11:07:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:57.114 11:07:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:57.114 11:07:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:07:59.646 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:59.905 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:59.905 00:07:59.905 real 0m9.307s 00:07:59.905 user 0m2.198s 00:07:59.905 sys 0m4.163s 
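The cleanup_dm sequence just traced tears everything down in dependency order: unmount the filesystem first, then remove the device-mapper node, then wipe the filesystem signatures off both backing partitions so the next test starts from blank devices. A condensed sketch of that flow, with a local mount point substituted for the workspace path:

#!/usr/bin/env bash
# Sketch of cleanup_dm as traced at setup/devices.sh@33-43.
dm_mount=./dm_mount              # illustrative stand-in for the test path
dm_name=nvme_dm_test
mountpoint -q "$dm_mount" && umount "$dm_mount"   # unmount before removal
if [[ -L /dev/mapper/$dm_name ]]; then
    dmsetup remove --force "$dm_name"             # drop the dm target
fi
# erase the ext4 signatures so the partitions read as blank afterwards
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2

The "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" lines in this log are wipefs removing exactly the ext4 superblock magic (0xEF53) from each partition.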
00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.905 11:07:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:07:59.905 ************************************
00:07:59.905 END TEST dm_mount
00:07:59.905 ************************************
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:07:59.905 11:07:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:08:00.164 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:08:00.164 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:08:00.164 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:08:00.164 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:08:00.164 11:07:40 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:08:00.164
00:08:00.164 real 0m24.183s
00:08:00.164 user 0m6.417s
00:08:00.164 sys 0m12.409s
00:08:00.164 11:07:40 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:00.164 11:07:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:08:00.164 ************************************
00:08:00.164 END TEST devices
00:08:00.164 ************************************
00:08:00.164
00:08:00.164 real 1m26.086s
00:08:00.164 user 0m24.694s
00:08:00.164 sys 0m45.181s
00:08:00.164 11:07:40 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:00.164 11:07:40 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:08:00.165 ************************************
00:08:00.165 END TEST setup.sh
00:08:00.165 ************************************
00:08:00.165 11:07:40 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:08:03.447 Hugepages
00:08:03.447 node hugesize free / total
00:08:03.447 node0 1048576kB 0 / 0
00:08:03.447 node0 2048kB 1024 / 1024
00:08:03.447 node1 1048576kB 0 / 0
00:08:03.447 node1 2048kB 1024 / 1024
00:08:03.447
00:08:03.447 Type BDF Vendor Device NUMA Driver Device Block devices
00:08:03.447 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:08:03.447 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:08:03.447 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:08:03.447 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:08:03.447 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:08:03.447 11:07:43 -- spdk/autotest.sh@117 -- # uname -s
00:08:03.463 11:07:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:08:03.463 11:07:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:08:03.463 11:07:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:08:05.973 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:05.973 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:09.256 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:08:09.256 11:07:49 -- common/autotest_common.sh@1515 -- # sleep 1
00:08:10.189 11:07:50 -- common/autotest_common.sh@1516 -- # bdfs=()
00:08:10.189 11:07:50 -- common/autotest_common.sh@1516 -- # local bdfs
00:08:10.189 11:07:50 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:08:10.189 11:07:50 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:08:10.189 11:07:50 -- common/autotest_common.sh@1496 -- # bdfs=()
00:08:10.189 11:07:50 -- common/autotest_common.sh@1496 -- # local bdfs
00:08:10.189 11:07:50 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:10.189 11:07:50 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:10.189 11:07:50 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:08:10.446 11:07:50 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:08:10.446 11:07:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:08:10.446 11:07:50 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:08:12.975 Waiting for block devices as requested
00:08:13.234 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:08:13.234 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:13.234 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:13.493 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:13.493 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:13.493 0000:00:04.3
(8086 2021): vfio-pci -> ioatdma 00:08:13.493 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:13.751 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:13.751 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:13.751 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:14.009 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:14.009 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:14.009 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:14.268 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:14.268 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:14.268 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:14.526 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:14.526 11:07:55 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:14.526 11:07:55 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:08:14.526 11:07:55 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:08:14.526 11:07:55 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:14.526 11:07:55 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:14.526 11:07:55 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:14.526 11:07:55 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:08:14.526 11:07:55 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:14.526 11:07:55 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:14.526 11:07:55 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:14.526 11:07:55 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:14.526 11:07:55 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:14.526 11:07:55 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:14.526 11:07:55 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:14.526 11:07:55 -- common/autotest_common.sh@1541 -- # continue 00:08:14.526 11:07:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:14.526 11:07:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.526 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:08:14.526 11:07:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:14.526 11:07:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.526 11:07:55 -- common/autotest_common.sh@10 -- # set +x 00:08:14.526 11:07:55 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:17.056 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:08:17.056 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:17.056 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:17.315 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:17.315 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:17.315 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:17.315 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:20.600 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:20.600 11:08:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:20.600 11:08:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.600 11:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:20.600 11:08:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:20.600 11:08:01 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:20.600 11:08:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:20.600 11:08:01 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:20.600 11:08:01 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:20.600 11:08:01 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:20.600 11:08:01 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:20.600 11:08:01 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:20.600 11:08:01 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:20.600 11:08:01 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:20.600 11:08:01 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:20.600 11:08:01 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:20.600 11:08:01 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:20.600 11:08:01 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:20.600 11:08:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:08:20.600 11:08:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:20.600 11:08:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:08:20.600 11:08:01 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:08:20.600 11:08:01 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:20.600 11:08:01 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:08:20.600 11:08:01 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:08:20.600 11:08:01 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:08:20.600 11:08:01 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:08:20.600 11:08:01 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3700497 00:08:20.600 11:08:01 -- common/autotest_common.sh@1583 -- # waitforlisten 3700497 00:08:20.600 11:08:01 -- common/autotest_common.sh@831 -- # '[' -z 3700497 ']' 00:08:20.600 11:08:01 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.600 11:08:01 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.600 11:08:01 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
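The waitforlisten call above parks the harness until the freshly launched spdk_tgt (pid 3700497) answers on its UNIX-domain RPC socket. A simplified sketch of that handshake, assuming the default /var/tmp/spdk.sock path printed in the message; the real helper retries an actual RPC rather than merely testing for the socket file, but the shape is the same (max_retries=100 comes from the trace):

  ./build/bin/spdk_tgt &
  pid=$!
  for _ in $(seq 1 100); do                # local max_retries=100, as traced above
      [ -S /var/tmp/spdk.sock ] && break   # assumption: socket existence as a readiness proxy
      sleep 0.1
  done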
00:08:20.600 11:08:01 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.600 11:08:01 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:20.600 11:08:01 -- common/autotest_common.sh@10 -- # set +x 00:08:20.600 [2024-10-15 11:08:01.206111] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:20.600 [2024-10-15 11:08:01.206177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700497 ] 00:08:20.858 [2024-10-15 11:08:01.273440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.858 [2024-10-15 11:08:01.321711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.116 11:08:01 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.116 11:08:01 -- common/autotest_common.sh@864 -- # return 0 00:08:21.116 11:08:01 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:08:21.116 11:08:01 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:08:21.116 11:08:01 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:08:24.399 nvme0n1 00:08:24.399 11:08:04 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:24.399 [2024-10-15 11:08:04.710691] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:24.399 request: 00:08:24.399 { 00:08:24.399 "nvme_ctrlr_name": "nvme0", 00:08:24.399 "password": "test", 00:08:24.399 "method": "bdev_nvme_opal_revert", 00:08:24.399 "req_id": 1 00:08:24.399 } 00:08:24.399 Got JSON-RPC error response 00:08:24.399 response: 00:08:24.399 { 00:08:24.399 "code": -32602, 00:08:24.399 "message": "Invalid parameters" 00:08:24.399 } 00:08:24.399 11:08:04 -- common/autotest_common.sh@1589 -- # true 00:08:24.399 11:08:04 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:08:24.399 11:08:04 -- common/autotest_common.sh@1593 -- # killprocess 3700497 00:08:24.399 11:08:04 -- common/autotest_common.sh@950 -- # '[' -z 3700497 ']' 00:08:24.399 11:08:04 -- common/autotest_common.sh@954 -- # kill -0 3700497 00:08:24.399 11:08:04 -- common/autotest_common.sh@955 -- # uname 00:08:24.399 11:08:04 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.399 11:08:04 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3700497 00:08:24.399 11:08:04 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.399 11:08:04 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.399 11:08:04 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3700497' 00:08:24.399 killing process with pid 3700497 00:08:24.399 11:08:04 -- common/autotest_common.sh@969 -- # kill 3700497 00:08:24.399 11:08:04 -- common/autotest_common.sh@974 -- # wait 3700497 00:08:28.580 11:08:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:28.580 11:08:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:28.580 11:08:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:28.580 11:08:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:28.580 11:08:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:28.580 11:08:08 -- common/autotest_common.sh@724 -- # xtrace_disable 
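The Opal revert above is driven through scripts/rpc.py, and the -32602 "Invalid parameters" response is the expected outcome on this box: the attached Intel 0a54 drive reports no Opal support, so vbdev_opal_rpc rejects the request and the harness absorbs the failure with the trailing true. Replayed by hand, the two calls from the trace are:

  sudo scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
  sudo scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # fails cleanly on a non-Opal drive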
00:08:28.580 11:08:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.580 11:08:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:28.580 11:08:08 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:08:28.580 11:08:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.580 11:08:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.580 11:08:08 -- common/autotest_common.sh@10 -- # set +x 00:08:28.580 ************************************ 00:08:28.580 START TEST env 00:08:28.580 ************************************ 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:08:28.580 * Looking for test storage... 00:08:28.580 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.580 11:08:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.580 11:08:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.580 11:08:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.580 11:08:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.580 11:08:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.580 11:08:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.580 11:08:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.580 11:08:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.580 11:08:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.580 11:08:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.580 11:08:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.580 11:08:08 env -- scripts/common.sh@344 -- # case "$op" in 00:08:28.580 11:08:08 env -- scripts/common.sh@345 -- # : 1 00:08:28.580 11:08:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.580 11:08:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.580 11:08:08 env -- scripts/common.sh@365 -- # decimal 1 00:08:28.580 11:08:08 env -- scripts/common.sh@353 -- # local d=1 00:08:28.580 11:08:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.580 11:08:08 env -- scripts/common.sh@355 -- # echo 1 00:08:28.580 11:08:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.580 11:08:08 env -- scripts/common.sh@366 -- # decimal 2 00:08:28.580 11:08:08 env -- scripts/common.sh@353 -- # local d=2 00:08:28.580 11:08:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.580 11:08:08 env -- scripts/common.sh@355 -- # echo 2 00:08:28.580 11:08:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.580 11:08:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.580 11:08:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.580 11:08:08 env -- scripts/common.sh@368 -- # return 0 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.580 --rc genhtml_branch_coverage=1 00:08:28.580 --rc genhtml_function_coverage=1 00:08:28.580 --rc genhtml_legend=1 00:08:28.580 --rc geninfo_all_blocks=1 00:08:28.580 --rc geninfo_unexecuted_blocks=1 00:08:28.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:28.580 ' 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.580 --rc genhtml_branch_coverage=1 00:08:28.580 --rc genhtml_function_coverage=1 00:08:28.580 --rc genhtml_legend=1 00:08:28.580 --rc geninfo_all_blocks=1 00:08:28.580 --rc geninfo_unexecuted_blocks=1 00:08:28.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:28.580 ' 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.580 --rc genhtml_branch_coverage=1 00:08:28.580 --rc genhtml_function_coverage=1 00:08:28.580 --rc genhtml_legend=1 00:08:28.580 --rc geninfo_all_blocks=1 00:08:28.580 --rc geninfo_unexecuted_blocks=1 00:08:28.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:28.580 ' 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.580 --rc genhtml_branch_coverage=1 00:08:28.580 --rc genhtml_function_coverage=1 00:08:28.580 --rc genhtml_legend=1 00:08:28.580 --rc geninfo_all_blocks=1 00:08:28.580 --rc geninfo_unexecuted_blocks=1 00:08:28.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:28.580 ' 00:08:28.580 11:08:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.580 11:08:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.580 11:08:08 env -- common/autotest_common.sh@10 -- # set +x 00:08:28.580 ************************************ 00:08:28.580 START TEST env_memory 00:08:28.580 ************************************ 00:08:28.580 11:08:08 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:08:28.580 00:08:28.580 00:08:28.580 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.580 http://cunit.sourceforge.net/ 00:08:28.580 00:08:28.580 00:08:28.580 Suite: memory 00:08:28.580 Test: alloc and free memory map ...[2024-10-15 11:08:08.988923] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:28.580 passed 00:08:28.580 Test: mem map translation ...[2024-10-15 11:08:09.002630] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:28.580 [2024-10-15 11:08:09.002646] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:28.580 [2024-10-15 11:08:09.002681] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:28.580 [2024-10-15 11:08:09.002690] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:28.580 passed 00:08:28.580 Test: mem map registration ...[2024-10-15 11:08:09.023572] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:28.580 [2024-10-15 11:08:09.023588] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:28.580 passed 00:08:28.580 Test: mem map adjacent registrations ...passed 00:08:28.580 00:08:28.580 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.580 suites 1 1 n/a 0 0 00:08:28.581 tests 4 4 4 0 0 00:08:28.581 asserts 152 152 152 0 n/a 00:08:28.581 00:08:28.581 Elapsed time = 0.093 seconds 00:08:28.581 00:08:28.581 real 0m0.106s 00:08:28.581 user 0m0.092s 00:08:28.581 sys 0m0.014s 00:08:28.581 11:08:09 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.581 11:08:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:28.581 ************************************ 00:08:28.581 END TEST env_memory 00:08:28.581 ************************************ 00:08:28.581 11:08:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:28.581 11:08:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.581 11:08:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.581 11:08:09 env -- common/autotest_common.sh@10 -- # set +x 00:08:28.581 ************************************ 00:08:28.581 START TEST env_vtophys 00:08:28.581 ************************************ 00:08:28.581 11:08:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:28.581 EAL: lib.eal log level changed from notice to debug 00:08:28.581 EAL: Detected lcore 0 as core 0 on socket 0 00:08:28.581 EAL: Detected lcore 1 as core 1 on socket 0 00:08:28.581 EAL: Detected lcore 2 as core 2 on socket 0 00:08:28.581 EAL: Detected lcore 3 as 
core 3 on socket 0 00:08:28.581 EAL: Detected lcore 4 as core 4 on socket 0 00:08:28.581 EAL: Detected lcore 5 as core 8 on socket 0 00:08:28.581 EAL: Detected lcore 6 as core 9 on socket 0 00:08:28.581 EAL: Detected lcore 7 as core 10 on socket 0 00:08:28.581 EAL: Detected lcore 8 as core 11 on socket 0 00:08:28.581 EAL: Detected lcore 9 as core 16 on socket 0 00:08:28.581 EAL: Detected lcore 10 as core 17 on socket 0 00:08:28.581 EAL: Detected lcore 11 as core 18 on socket 0 00:08:28.581 EAL: Detected lcore 12 as core 19 on socket 0 00:08:28.581 EAL: Detected lcore 13 as core 20 on socket 0 00:08:28.581 EAL: Detected lcore 14 as core 24 on socket 0 00:08:28.581 EAL: Detected lcore 15 as core 25 on socket 0 00:08:28.581 EAL: Detected lcore 16 as core 26 on socket 0 00:08:28.581 EAL: Detected lcore 17 as core 27 on socket 0 00:08:28.581 EAL: Detected lcore 18 as core 0 on socket 1 00:08:28.581 EAL: Detected lcore 19 as core 1 on socket 1 00:08:28.581 EAL: Detected lcore 20 as core 2 on socket 1 00:08:28.581 EAL: Detected lcore 21 as core 3 on socket 1 00:08:28.581 EAL: Detected lcore 22 as core 4 on socket 1 00:08:28.581 EAL: Detected lcore 23 as core 8 on socket 1 00:08:28.581 EAL: Detected lcore 24 as core 9 on socket 1 00:08:28.581 EAL: Detected lcore 25 as core 10 on socket 1 00:08:28.581 EAL: Detected lcore 26 as core 11 on socket 1 00:08:28.581 EAL: Detected lcore 27 as core 16 on socket 1 00:08:28.581 EAL: Detected lcore 28 as core 17 on socket 1 00:08:28.581 EAL: Detected lcore 29 as core 18 on socket 1 00:08:28.581 EAL: Detected lcore 30 as core 19 on socket 1 00:08:28.581 EAL: Detected lcore 31 as core 20 on socket 1 00:08:28.581 EAL: Detected lcore 32 as core 24 on socket 1 00:08:28.581 EAL: Detected lcore 33 as core 25 on socket 1 00:08:28.581 EAL: Detected lcore 34 as core 26 on socket 1 00:08:28.581 EAL: Detected lcore 35 as core 27 on socket 1 00:08:28.581 EAL: Detected lcore 36 as core 0 on socket 0 00:08:28.581 EAL: Detected lcore 37 as core 1 on socket 0 00:08:28.581 EAL: Detected lcore 38 as core 2 on socket 0 00:08:28.581 EAL: Detected lcore 39 as core 3 on socket 0 00:08:28.581 EAL: Detected lcore 40 as core 4 on socket 0 00:08:28.581 EAL: Detected lcore 41 as core 8 on socket 0 00:08:28.581 EAL: Detected lcore 42 as core 9 on socket 0 00:08:28.581 EAL: Detected lcore 43 as core 10 on socket 0 00:08:28.581 EAL: Detected lcore 44 as core 11 on socket 0 00:08:28.581 EAL: Detected lcore 45 as core 16 on socket 0 00:08:28.581 EAL: Detected lcore 46 as core 17 on socket 0 00:08:28.581 EAL: Detected lcore 47 as core 18 on socket 0 00:08:28.581 EAL: Detected lcore 48 as core 19 on socket 0 00:08:28.581 EAL: Detected lcore 49 as core 20 on socket 0 00:08:28.581 EAL: Detected lcore 50 as core 24 on socket 0 00:08:28.581 EAL: Detected lcore 51 as core 25 on socket 0 00:08:28.581 EAL: Detected lcore 52 as core 26 on socket 0 00:08:28.581 EAL: Detected lcore 53 as core 27 on socket 0 00:08:28.581 EAL: Detected lcore 54 as core 0 on socket 1 00:08:28.581 EAL: Detected lcore 55 as core 1 on socket 1 00:08:28.581 EAL: Detected lcore 56 as core 2 on socket 1 00:08:28.581 EAL: Detected lcore 57 as core 3 on socket 1 00:08:28.581 EAL: Detected lcore 58 as core 4 on socket 1 00:08:28.581 EAL: Detected lcore 59 as core 8 on socket 1 00:08:28.581 EAL: Detected lcore 60 as core 9 on socket 1 00:08:28.581 EAL: Detected lcore 61 as core 10 on socket 1 00:08:28.581 EAL: Detected lcore 62 as core 11 on socket 1 00:08:28.581 EAL: Detected lcore 63 as core 16 on socket 1 00:08:28.581 EAL: 
Detected lcore 64 as core 17 on socket 1 00:08:28.581 EAL: Detected lcore 65 as core 18 on socket 1 00:08:28.581 EAL: Detected lcore 66 as core 19 on socket 1 00:08:28.581 EAL: Detected lcore 67 as core 20 on socket 1 00:08:28.581 EAL: Detected lcore 68 as core 24 on socket 1 00:08:28.581 EAL: Detected lcore 69 as core 25 on socket 1 00:08:28.581 EAL: Detected lcore 70 as core 26 on socket 1 00:08:28.581 EAL: Detected lcore 71 as core 27 on socket 1 00:08:28.581 EAL: Maximum logical cores by configuration: 128 00:08:28.581 EAL: Detected CPU lcores: 72 00:08:28.581 EAL: Detected NUMA nodes: 2 00:08:28.581 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:28.581 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:28.581 EAL: Checking presence of .so 'librte_eal.so' 00:08:28.581 EAL: Detected static linkage of DPDK 00:08:28.581 EAL: No shared files mode enabled, IPC will be disabled 00:08:28.581 EAL: Bus pci wants IOVA as 'DC' 00:08:28.581 EAL: Buses did not request a specific IOVA mode. 00:08:28.581 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:28.581 EAL: Selected IOVA mode 'VA' 00:08:28.581 EAL: Probing VFIO support... 00:08:28.581 EAL: IOMMU type 1 (Type 1) is supported 00:08:28.581 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:28.581 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:28.581 EAL: VFIO support initialized 00:08:28.581 EAL: Ask a virtual area of 0x2e000 bytes 00:08:28.581 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:28.581 EAL: Setting up physically contiguous memory... 00:08:28.581 EAL: Setting maximum number of open files to 524288 00:08:28.581 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:28.581 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:28.581 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:28.581 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:28.581 EAL: Ask a virtual area of 0x61000 bytes 00:08:28.581 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:28.581 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:28.581 EAL: Ask a virtual area of 0x400000000 bytes 00:08:28.581 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:28.581 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:28.581 EAL: Hugepages will be freed exactly as allocated. 00:08:28.581 EAL: No shared files mode enabled, IPC is disabled 00:08:28.581 EAL: No shared files mode enabled, IPC is disabled 00:08:28.581 EAL: TSC frequency is ~2300000 KHz 00:08:28.581 EAL: Main lcore 0 is ready (tid=7f0c7f0a4a00;cpuset=[0]) 00:08:28.581 EAL: Trying to obtain current memory policy. 00:08:28.581 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.581 EAL: Restoring previous memory policy: 0 00:08:28.581 EAL: request: mp_malloc_sync 00:08:28.581 EAL: No shared files mode enabled, IPC is disabled 00:08:28.581 EAL: Heap on socket 0 was expanded by 2MB 00:08:28.581 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Mem event callback 'spdk:(nil)' registered 00:08:28.839 00:08:28.839 00:08:28.839 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.839 http://cunit.sourceforge.net/ 00:08:28.839 00:08:28.839 00:08:28.839 Suite: components_suite 00:08:28.839 Test: vtophys_malloc_test ...passed 00:08:28.839 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 4MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 4MB 00:08:28.839 EAL: Trying to obtain current memory policy. 
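Every "Heap on socket 0 was expanded/shrunk by N MB" pair in this suite is DPDK resizing its dynamic hugepage heap and notifying SPDK through the 'spdk:' mem event callback registered above. The same motion is visible from outside the test in the standard per-node sysfs counters (the 2048kB page size matches the Hugepages table printed by setup.sh status earlier):

  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages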
00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 6MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 6MB 00:08:28.839 EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 10MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 10MB 00:08:28.839 EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 18MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 18MB 00:08:28.839 EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 34MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 34MB 00:08:28.839 EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 66MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 66MB 00:08:28.839 EAL: Trying to obtain current memory policy. 
00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 130MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 130MB 00:08:28.839 EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:28.839 EAL: Restoring previous memory policy: 4 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was expanded by 258MB 00:08:28.839 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.839 EAL: request: mp_malloc_sync 00:08:28.839 EAL: No shared files mode enabled, IPC is disabled 00:08:28.839 EAL: Heap on socket 0 was shrunk by 258MB 00:08:28.839 EAL: Trying to obtain current memory policy. 00:08:28.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:29.098 EAL: Restoring previous memory policy: 4 00:08:29.098 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.098 EAL: request: mp_malloc_sync 00:08:29.098 EAL: No shared files mode enabled, IPC is disabled 00:08:29.098 EAL: Heap on socket 0 was expanded by 514MB 00:08:29.098 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.098 EAL: request: mp_malloc_sync 00:08:29.098 EAL: No shared files mode enabled, IPC is disabled 00:08:29.098 EAL: Heap on socket 0 was shrunk by 514MB 00:08:29.098 EAL: Trying to obtain current memory policy. 
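The expansion sizes in vtophys_spdk_malloc_test are easy to mistake for noise, but they appear to follow 2^k + 2 MB for k = 1..10, which is why the run goes 4, 6, 10, 18, 34, 66, 130, 258, 514 and, in the final round below, 1026 MB. A one-liner reproducing the series:

  for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB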
00:08:29.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:29.356 EAL: Restoring previous memory policy: 4 00:08:29.356 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.356 EAL: request: mp_malloc_sync 00:08:29.356 EAL: No shared files mode enabled, IPC is disabled 00:08:29.356 EAL: Heap on socket 0 was expanded by 1026MB 00:08:29.614 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.614 EAL: request: mp_malloc_sync 00:08:29.614 EAL: No shared files mode enabled, IPC is disabled 00:08:29.614 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:29.614 passed 00:08:29.614 00:08:29.614 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.614 suites 1 1 n/a 0 0 00:08:29.614 tests 2 2 2 0 0 00:08:29.614 asserts 497 497 497 0 n/a 00:08:29.614 00:08:29.614 Elapsed time = 0.982 seconds 00:08:29.614 EAL: Calling mem event callback 'spdk:(nil)' 00:08:29.614 EAL: request: mp_malloc_sync 00:08:29.614 EAL: No shared files mode enabled, IPC is disabled 00:08:29.614 EAL: Heap on socket 0 was shrunk by 2MB 00:08:29.614 EAL: No shared files mode enabled, IPC is disabled 00:08:29.614 EAL: No shared files mode enabled, IPC is disabled 00:08:29.614 EAL: No shared files mode enabled, IPC is disabled 00:08:29.614 00:08:29.614 real 0m1.105s 00:08:29.614 user 0m0.635s 00:08:29.614 sys 0m0.442s 00:08:29.614 11:08:10 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.614 11:08:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:29.614 ************************************ 00:08:29.614 END TEST env_vtophys 00:08:29.614 ************************************ 00:08:29.873 11:08:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:08:29.873 11:08:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.873 11:08:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.873 11:08:10 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.873 ************************************ 00:08:29.873 START TEST env_pci 00:08:29.873 ************************************ 00:08:29.873 11:08:10 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:08:29.873 00:08:29.873 00:08:29.873 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.873 http://cunit.sourceforge.net/ 00:08:29.873 00:08:29.873 00:08:29.873 Suite: pci 00:08:29.873 Test: pci_hook ...[2024-10-15 11:08:10.341309] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1112:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3701759 has claimed it 00:08:29.873 EAL: Cannot find device (10000:00:01.0) 00:08:29.873 EAL: Failed to attach device on primary process 00:08:29.873 passed 00:08:29.873 00:08:29.873 Run Summary: Type Total Ran Passed Failed Inactive 00:08:29.873 suites 1 1 n/a 0 0 00:08:29.873 tests 1 1 1 0 0 00:08:29.873 asserts 25 25 25 0 n/a 00:08:29.873 00:08:29.873 Elapsed time = 0.036 seconds 00:08:29.873 00:08:29.873 real 0m0.056s 00:08:29.873 user 0m0.014s 00:08:29.873 sys 0m0.042s 00:08:29.873 11:08:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.873 11:08:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:29.873 ************************************ 00:08:29.873 END TEST env_pci 00:08:29.873 ************************************ 00:08:29.873 11:08:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:29.873 
11:08:10 env -- env/env.sh@15 -- # uname 00:08:29.873 11:08:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:29.873 11:08:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:29.873 11:08:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.873 11:08:10 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:29.873 11:08:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.873 11:08:10 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.873 ************************************ 00:08:29.873 START TEST env_dpdk_post_init 00:08:29.873 ************************************ 00:08:29.873 11:08:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:29.873 EAL: Detected CPU lcores: 72 00:08:29.873 EAL: Detected NUMA nodes: 2 00:08:29.873 EAL: Detected static linkage of DPDK 00:08:29.873 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:29.873 EAL: Selected IOVA mode 'VA' 00:08:29.873 EAL: VFIO support initialized 00:08:30.132 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:30.132 EAL: Using IOMMU type 1 (Type 1) 00:08:30.701 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:08:35.967 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:08:35.967 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:08:36.533 Starting DPDK initialization... 00:08:36.533 Starting SPDK post initialization... 00:08:36.533 SPDK NVMe probe 00:08:36.533 Attaching to 0000:5e:00.0 00:08:36.533 Attached to 0000:5e:00.0 00:08:36.533 Cleaning up... 
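env_dpdk_post_init above probes, attaches to and then releases the NVMe controller at 0000:5e:00.0; the device stays bound to vfio-pci for the whole suite because setup.sh moved it off the kernel nvme driver beforehand. A quick check of the current binding, using the BDF from the log and the standard sysfs driver symlink:

  basename "$(readlink /sys/bus/pci/devices/0000:5e:00.0/driver)"   # vfio-pci while testing, nvme again after setup.sh reset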
00:08:36.533 00:08:36.533 real 0m6.501s 00:08:36.533 user 0m4.722s 00:08:36.533 sys 0m1.027s 00:08:36.533 11:08:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.533 11:08:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:36.533 ************************************ 00:08:36.533 END TEST env_dpdk_post_init 00:08:36.533 ************************************ 00:08:36.533 11:08:16 env -- env/env.sh@26 -- # uname 00:08:36.533 11:08:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:36.533 11:08:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:36.533 11:08:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.533 11:08:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.533 11:08:17 env -- common/autotest_common.sh@10 -- # set +x 00:08:36.533 ************************************ 00:08:36.533 START TEST env_mem_callbacks 00:08:36.533 ************************************ 00:08:36.533 11:08:17 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:36.533 EAL: Detected CPU lcores: 72 00:08:36.533 EAL: Detected NUMA nodes: 2 00:08:36.533 EAL: Detected static linkage of DPDK 00:08:36.533 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:36.533 EAL: Selected IOVA mode 'VA' 00:08:36.533 EAL: VFIO support initialized 00:08:36.533 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:36.533 00:08:36.533 00:08:36.533 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.533 http://cunit.sourceforge.net/ 00:08:36.533 00:08:36.533 00:08:36.533 Suite: memory 00:08:36.533 Test: test ... 
00:08:36.533 register 0x200000200000 2097152 00:08:36.533 malloc 3145728 00:08:36.533 register 0x200000400000 4194304 00:08:36.533 buf 0x200000500000 len 3145728 PASSED 00:08:36.533 malloc 64 00:08:36.533 buf 0x2000004fff40 len 64 PASSED 00:08:36.533 malloc 4194304 00:08:36.533 register 0x200000800000 6291456 00:08:36.533 buf 0x200000a00000 len 4194304 PASSED 00:08:36.533 free 0x200000500000 3145728 00:08:36.533 free 0x2000004fff40 64 00:08:36.533 unregister 0x200000400000 4194304 PASSED 00:08:36.533 free 0x200000a00000 4194304 00:08:36.533 unregister 0x200000800000 6291456 PASSED 00:08:36.533 malloc 8388608 00:08:36.533 register 0x200000400000 10485760 00:08:36.533 buf 0x200000600000 len 8388608 PASSED 00:08:36.533 free 0x200000600000 8388608 00:08:36.533 unregister 0x200000400000 10485760 PASSED 00:08:36.533 passed 00:08:36.533 00:08:36.533 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.533 suites 1 1 n/a 0 0 00:08:36.533 tests 1 1 1 0 0 00:08:36.533 asserts 15 15 15 0 n/a 00:08:36.533 00:08:36.533 Elapsed time = 0.005 seconds 00:08:36.533 00:08:36.533 real 0m0.067s 00:08:36.533 user 0m0.018s 00:08:36.533 sys 0m0.048s 00:08:36.533 11:08:17 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.533 11:08:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:36.533 ************************************ 00:08:36.533 END TEST env_mem_callbacks 00:08:36.533 ************************************ 00:08:36.533 00:08:36.533 real 0m8.405s 00:08:36.533 user 0m5.723s 00:08:36.533 sys 0m1.935s 00:08:36.533 11:08:17 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.533 11:08:17 env -- common/autotest_common.sh@10 -- # set +x 00:08:36.533 ************************************ 00:08:36.533 END TEST env 00:08:36.533 ************************************ 00:08:36.832 11:08:17 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:08:36.832 11:08:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.832 11:08:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.832 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 ************************************ 00:08:36.832 START TEST rpc 00:08:36.832 ************************************ 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:08:36.832 * Looking for test storage... 
00:08:36.832 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.832 11:08:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.832 11:08:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.832 11:08:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.832 11:08:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.832 11:08:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.832 11:08:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:36.832 11:08:17 rpc -- scripts/common.sh@345 -- # : 1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.832 11:08:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.832 11:08:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@353 -- # local d=1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.832 11:08:17 rpc -- scripts/common.sh@355 -- # echo 1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.832 11:08:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@353 -- # local d=2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.832 11:08:17 rpc -- scripts/common.sh@355 -- # echo 2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.832 11:08:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.832 11:08:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.832 11:08:17 rpc -- scripts/common.sh@368 -- # return 0 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:36.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.832 --rc genhtml_branch_coverage=1 00:08:36.832 --rc genhtml_function_coverage=1 00:08:36.832 --rc genhtml_legend=1 00:08:36.832 --rc geninfo_all_blocks=1 00:08:36.832 --rc geninfo_unexecuted_blocks=1 00:08:36.832 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:36.832 ' 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:36.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.832 --rc genhtml_branch_coverage=1 00:08:36.832 --rc genhtml_function_coverage=1 00:08:36.832 --rc genhtml_legend=1 00:08:36.832 --rc geninfo_all_blocks=1 00:08:36.832 --rc geninfo_unexecuted_blocks=1 00:08:36.832 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:36.832 ' 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:08:36.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.832 --rc genhtml_branch_coverage=1 00:08:36.832 --rc genhtml_function_coverage=1 00:08:36.832 --rc genhtml_legend=1 00:08:36.832 --rc geninfo_all_blocks=1 00:08:36.832 --rc geninfo_unexecuted_blocks=1 00:08:36.832 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:36.832 ' 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:36.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.832 --rc genhtml_branch_coverage=1 00:08:36.832 --rc genhtml_function_coverage=1 00:08:36.832 --rc genhtml_legend=1 00:08:36.832 --rc geninfo_all_blocks=1 00:08:36.832 --rc geninfo_unexecuted_blocks=1 00:08:36.832 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:36.832 ' 00:08:36.832 11:08:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3702836 00:08:36.832 11:08:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:36.832 11:08:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.832 11:08:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3702836 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@831 -- # '[' -z 3702836 ']' 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.832 11:08:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 [2024-10-15 11:08:17.439280] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:36.832 [2024-10-15 11:08:17.439344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702836 ] 00:08:37.104 [2024-10-15 11:08:17.505852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.104 [2024-10-15 11:08:17.549533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:37.104 [2024-10-15 11:08:17.549580] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3702836' to capture a snapshot of events at runtime. 00:08:37.104 [2024-10-15 11:08:17.549590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.104 [2024-10-15 11:08:17.549598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.104 [2024-10-15 11:08:17.549605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3702836 for offline analysis/debug. 
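The app_setup_trace notices above name both ways to recover the bdev tracepoints enabled by spdk_tgt -e bdev: attach spdk_trace to the live process, or grab the shared-memory file after the fact. Following the two messages verbatim (the /tmp destination is just an illustration):

  sudo build/bin/spdk_trace -s spdk_tgt -p 3702836                       # live snapshot, as the notice suggests
  cp /dev/shm/spdk_tgt_trace.pid3702836 /tmp/spdk_tgt_trace.pid3702836   # offline copy, also per the notice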
00:08:37.104 [2024-10-15 11:08:17.550005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.376 11:08:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.376 11:08:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:37.376 11:08:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:37.376 11:08:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:37.376 11:08:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:37.376 11:08:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:37.376 11:08:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.376 11:08:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.376 11:08:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 ************************************ 00:08:37.376 START TEST rpc_integrity 00:08:37.376 ************************************ 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:37.376 { 00:08:37.376 "name": "Malloc0", 00:08:37.376 "aliases": [ 00:08:37.376 "5e9af4b2-3fef-4f95-ab84-1e7b45254dd8" 00:08:37.376 ], 00:08:37.376 "product_name": "Malloc disk", 00:08:37.376 "block_size": 512, 00:08:37.376 "num_blocks": 16384, 00:08:37.376 "uuid": "5e9af4b2-3fef-4f95-ab84-1e7b45254dd8", 00:08:37.376 "assigned_rate_limits": { 00:08:37.376 "rw_ios_per_sec": 0, 00:08:37.376 "rw_mbytes_per_sec": 0, 00:08:37.376 "r_mbytes_per_sec": 0, 00:08:37.376 "w_mbytes_per_sec": 
0 00:08:37.376 }, 00:08:37.376 "claimed": false, 00:08:37.376 "zoned": false, 00:08:37.376 "supported_io_types": { 00:08:37.376 "read": true, 00:08:37.376 "write": true, 00:08:37.376 "unmap": true, 00:08:37.376 "flush": true, 00:08:37.376 "reset": true, 00:08:37.376 "nvme_admin": false, 00:08:37.376 "nvme_io": false, 00:08:37.376 "nvme_io_md": false, 00:08:37.376 "write_zeroes": true, 00:08:37.376 "zcopy": true, 00:08:37.376 "get_zone_info": false, 00:08:37.376 "zone_management": false, 00:08:37.376 "zone_append": false, 00:08:37.376 "compare": false, 00:08:37.376 "compare_and_write": false, 00:08:37.376 "abort": true, 00:08:37.376 "seek_hole": false, 00:08:37.376 "seek_data": false, 00:08:37.376 "copy": true, 00:08:37.376 "nvme_iov_md": false 00:08:37.376 }, 00:08:37.376 "memory_domains": [ 00:08:37.376 { 00:08:37.376 "dma_device_id": "system", 00:08:37.376 "dma_device_type": 1 00:08:37.376 }, 00:08:37.376 { 00:08:37.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.376 "dma_device_type": 2 00:08:37.376 } 00:08:37.376 ], 00:08:37.376 "driver_specific": {} 00:08:37.376 } 00:08:37.376 ]' 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 [2024-10-15 11:08:17.938759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:37.376 [2024-10-15 11:08:17.938795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.376 [2024-10-15 11:08:17.938811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x472e830 00:08:37.376 [2024-10-15 11:08:17.938821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.376 [2024-10-15 11:08:17.939745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.376 [2024-10-15 11:08:17.939769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:37.376 Passthru0 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 11:08:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.376 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:37.376 { 00:08:37.376 "name": "Malloc0", 00:08:37.376 "aliases": [ 00:08:37.376 "5e9af4b2-3fef-4f95-ab84-1e7b45254dd8" 00:08:37.376 ], 00:08:37.376 "product_name": "Malloc disk", 00:08:37.376 "block_size": 512, 00:08:37.376 "num_blocks": 16384, 00:08:37.376 "uuid": "5e9af4b2-3fef-4f95-ab84-1e7b45254dd8", 00:08:37.376 "assigned_rate_limits": { 00:08:37.376 "rw_ios_per_sec": 0, 00:08:37.376 "rw_mbytes_per_sec": 0, 00:08:37.376 "r_mbytes_per_sec": 0, 00:08:37.376 "w_mbytes_per_sec": 0 00:08:37.376 }, 00:08:37.376 "claimed": true, 00:08:37.376 "claim_type": "exclusive_write", 00:08:37.376 "zoned": false, 00:08:37.376 "supported_io_types": { 00:08:37.376 "read": true, 00:08:37.376 "write": true, 00:08:37.376 "unmap": true, 
00:08:37.376 "flush": true, 00:08:37.376 "reset": true, 00:08:37.376 "nvme_admin": false, 00:08:37.376 "nvme_io": false, 00:08:37.376 "nvme_io_md": false, 00:08:37.376 "write_zeroes": true, 00:08:37.376 "zcopy": true, 00:08:37.376 "get_zone_info": false, 00:08:37.376 "zone_management": false, 00:08:37.376 "zone_append": false, 00:08:37.376 "compare": false, 00:08:37.376 "compare_and_write": false, 00:08:37.376 "abort": true, 00:08:37.376 "seek_hole": false, 00:08:37.376 "seek_data": false, 00:08:37.376 "copy": true, 00:08:37.376 "nvme_iov_md": false 00:08:37.376 }, 00:08:37.376 "memory_domains": [ 00:08:37.376 { 00:08:37.376 "dma_device_id": "system", 00:08:37.376 "dma_device_type": 1 00:08:37.376 }, 00:08:37.376 { 00:08:37.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.376 "dma_device_type": 2 00:08:37.376 } 00:08:37.376 ], 00:08:37.376 "driver_specific": {} 00:08:37.376 }, 00:08:37.376 { 00:08:37.376 "name": "Passthru0", 00:08:37.376 "aliases": [ 00:08:37.376 "e286e993-2109-5122-a794-b51088380c28" 00:08:37.376 ], 00:08:37.376 "product_name": "passthru", 00:08:37.376 "block_size": 512, 00:08:37.376 "num_blocks": 16384, 00:08:37.376 "uuid": "e286e993-2109-5122-a794-b51088380c28", 00:08:37.376 "assigned_rate_limits": { 00:08:37.376 "rw_ios_per_sec": 0, 00:08:37.376 "rw_mbytes_per_sec": 0, 00:08:37.376 "r_mbytes_per_sec": 0, 00:08:37.376 "w_mbytes_per_sec": 0 00:08:37.376 }, 00:08:37.376 "claimed": false, 00:08:37.376 "zoned": false, 00:08:37.376 "supported_io_types": { 00:08:37.376 "read": true, 00:08:37.376 "write": true, 00:08:37.376 "unmap": true, 00:08:37.376 "flush": true, 00:08:37.376 "reset": true, 00:08:37.376 "nvme_admin": false, 00:08:37.376 "nvme_io": false, 00:08:37.376 "nvme_io_md": false, 00:08:37.376 "write_zeroes": true, 00:08:37.376 "zcopy": true, 00:08:37.376 "get_zone_info": false, 00:08:37.376 "zone_management": false, 00:08:37.376 "zone_append": false, 00:08:37.376 "compare": false, 00:08:37.376 "compare_and_write": false, 00:08:37.376 "abort": true, 00:08:37.376 "seek_hole": false, 00:08:37.376 "seek_data": false, 00:08:37.376 "copy": true, 00:08:37.376 "nvme_iov_md": false 00:08:37.376 }, 00:08:37.376 "memory_domains": [ 00:08:37.376 { 00:08:37.376 "dma_device_id": "system", 00:08:37.376 "dma_device_type": 1 00:08:37.376 }, 00:08:37.376 { 00:08:37.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.376 "dma_device_type": 2 00:08:37.376 } 00:08:37.377 ], 00:08:37.377 "driver_specific": { 00:08:37.377 "passthru": { 00:08:37.377 "name": "Passthru0", 00:08:37.377 "base_bdev_name": "Malloc0" 00:08:37.377 } 00:08:37.377 } 00:08:37.377 } 00:08:37.377 ]' 00:08:37.377 11:08:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:37.657 11:08:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:37.657 11:08:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:37.657 11:08:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:37.657 11:08:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:37.657 00:08:37.657 real 0m0.290s 00:08:37.657 user 0m0.176s 00:08:37.657 sys 0m0.050s 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 ************************************ 00:08:37.657 END TEST rpc_integrity 00:08:37.657 ************************************ 00:08:37.657 11:08:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:37.657 11:08:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.657 11:08:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.657 11:08:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 ************************************ 00:08:37.657 START TEST rpc_plugins 00:08:37.657 ************************************ 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:37.657 { 00:08:37.657 "name": "Malloc1", 00:08:37.657 "aliases": [ 00:08:37.657 "dd458ab0-ed6e-404d-971d-2e46808fe846" 00:08:37.657 ], 00:08:37.657 "product_name": "Malloc disk", 00:08:37.657 "block_size": 4096, 00:08:37.657 "num_blocks": 256, 00:08:37.657 "uuid": "dd458ab0-ed6e-404d-971d-2e46808fe846", 00:08:37.657 "assigned_rate_limits": { 00:08:37.657 "rw_ios_per_sec": 0, 00:08:37.657 "rw_mbytes_per_sec": 0, 00:08:37.657 "r_mbytes_per_sec": 0, 00:08:37.657 "w_mbytes_per_sec": 0 00:08:37.657 }, 00:08:37.657 "claimed": false, 00:08:37.657 "zoned": false, 00:08:37.657 "supported_io_types": { 00:08:37.657 "read": true, 00:08:37.657 "write": true, 00:08:37.657 "unmap": true, 00:08:37.657 "flush": true, 00:08:37.657 "reset": true, 00:08:37.657 "nvme_admin": false, 00:08:37.657 "nvme_io": false, 00:08:37.657 "nvme_io_md": false, 00:08:37.657 "write_zeroes": true, 00:08:37.657 "zcopy": true, 00:08:37.657 "get_zone_info": false, 00:08:37.657 "zone_management": false, 00:08:37.657 "zone_append": false, 00:08:37.657 "compare": false, 00:08:37.657 "compare_and_write": false, 00:08:37.657 "abort": true, 00:08:37.657 "seek_hole": false, 00:08:37.657 "seek_data": false, 00:08:37.657 "copy": true, 00:08:37.657 
"nvme_iov_md": false 00:08:37.657 }, 00:08:37.657 "memory_domains": [ 00:08:37.657 { 00:08:37.657 "dma_device_id": "system", 00:08:37.657 "dma_device_type": 1 00:08:37.657 }, 00:08:37.657 { 00:08:37.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.657 "dma_device_type": 2 00:08:37.657 } 00:08:37.657 ], 00:08:37.657 "driver_specific": {} 00:08:37.657 } 00:08:37.657 ]' 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:37.657 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:37.951 11:08:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:37.951 00:08:37.951 real 0m0.143s 00:08:37.951 user 0m0.086s 00:08:37.951 sys 0m0.027s 00:08:37.951 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.951 11:08:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:37.951 ************************************ 00:08:37.951 END TEST rpc_plugins 00:08:37.951 ************************************ 00:08:37.951 11:08:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:37.951 11:08:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.951 11:08:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.951 11:08:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.951 ************************************ 00:08:37.951 START TEST rpc_trace_cmd_test 00:08:37.951 ************************************ 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.951 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:37.951 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3702836", 00:08:37.951 "tpoint_group_mask": "0x8", 00:08:37.951 "iscsi_conn": { 00:08:37.951 "mask": "0x2", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "scsi": { 00:08:37.951 "mask": "0x4", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "bdev": { 00:08:37.951 "mask": "0x8", 00:08:37.951 "tpoint_mask": "0xffffffffffffffff" 00:08:37.951 }, 00:08:37.951 "nvmf_rdma": { 00:08:37.951 "mask": "0x10", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "nvmf_tcp": { 00:08:37.951 "mask": "0x20", 
00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "ftl": { 00:08:37.951 "mask": "0x40", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "blobfs": { 00:08:37.951 "mask": "0x80", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "dsa": { 00:08:37.951 "mask": "0x200", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "thread": { 00:08:37.951 "mask": "0x400", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "nvme_pcie": { 00:08:37.951 "mask": "0x800", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "iaa": { 00:08:37.951 "mask": "0x1000", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "nvme_tcp": { 00:08:37.951 "mask": "0x2000", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.951 }, 00:08:37.951 "bdev_nvme": { 00:08:37.951 "mask": "0x4000", 00:08:37.951 "tpoint_mask": "0x0" 00:08:37.952 }, 00:08:37.952 "sock": { 00:08:37.952 "mask": "0x8000", 00:08:37.952 "tpoint_mask": "0x0" 00:08:37.952 }, 00:08:37.952 "blob": { 00:08:37.952 "mask": "0x10000", 00:08:37.952 "tpoint_mask": "0x0" 00:08:37.952 }, 00:08:37.952 "bdev_raid": { 00:08:37.952 "mask": "0x20000", 00:08:37.952 "tpoint_mask": "0x0" 00:08:37.952 }, 00:08:37.952 "scheduler": { 00:08:37.952 "mask": "0x40000", 00:08:37.952 "tpoint_mask": "0x0" 00:08:37.952 } 00:08:37.952 }' 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:37.952 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:38.224 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:38.224 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:38.224 11:08:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:38.224 00:08:38.224 real 0m0.240s 00:08:38.224 user 0m0.188s 00:08:38.225 sys 0m0.041s 00:08:38.225 11:08:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.225 11:08:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 ************************************ 00:08:38.225 END TEST rpc_trace_cmd_test 00:08:38.225 ************************************ 00:08:38.225 11:08:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:38.225 11:08:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:38.225 11:08:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:38.225 11:08:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.225 11:08:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.225 11:08:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 ************************************ 00:08:38.225 START TEST rpc_daemon_integrity 00:08:38.225 ************************************ 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.225 11:08:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:38.225 { 00:08:38.225 "name": "Malloc2", 00:08:38.225 "aliases": [ 00:08:38.225 "4aea9196-de95-491f-a7e0-0b00976d36a3" 00:08:38.225 ], 00:08:38.225 "product_name": "Malloc disk", 00:08:38.225 "block_size": 512, 00:08:38.225 "num_blocks": 16384, 00:08:38.225 "uuid": "4aea9196-de95-491f-a7e0-0b00976d36a3", 00:08:38.225 "assigned_rate_limits": { 00:08:38.225 "rw_ios_per_sec": 0, 00:08:38.225 "rw_mbytes_per_sec": 0, 00:08:38.225 "r_mbytes_per_sec": 0, 00:08:38.225 "w_mbytes_per_sec": 0 00:08:38.225 }, 00:08:38.225 "claimed": false, 00:08:38.225 "zoned": false, 00:08:38.225 "supported_io_types": { 00:08:38.225 "read": true, 00:08:38.225 "write": true, 00:08:38.225 "unmap": true, 00:08:38.225 "flush": true, 00:08:38.225 "reset": true, 00:08:38.225 "nvme_admin": false, 00:08:38.225 "nvme_io": false, 00:08:38.225 "nvme_io_md": false, 00:08:38.225 "write_zeroes": true, 00:08:38.225 "zcopy": true, 00:08:38.225 "get_zone_info": false, 00:08:38.225 "zone_management": false, 00:08:38.225 "zone_append": false, 00:08:38.225 "compare": false, 00:08:38.225 "compare_and_write": false, 00:08:38.225 "abort": true, 00:08:38.225 "seek_hole": false, 00:08:38.225 "seek_data": false, 00:08:38.225 "copy": true, 00:08:38.225 "nvme_iov_md": false 00:08:38.225 }, 00:08:38.225 "memory_domains": [ 00:08:38.225 { 00:08:38.225 "dma_device_id": "system", 00:08:38.225 "dma_device_type": 1 00:08:38.225 }, 00:08:38.225 { 00:08:38.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.225 "dma_device_type": 2 00:08:38.225 } 00:08:38.225 ], 00:08:38.225 "driver_specific": {} 00:08:38.225 } 00:08:38.225 ]' 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.225 [2024-10-15 11:08:18.829071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:38.225 
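Note on the sequence being logged: these vbdev_passthru NOTICE lines (continued just below) record the same malloc-plus-passthru round trip that rpc_integrity drove earlier, now repeated by rpc_daemon_integrity against Malloc2. For readers reproducing it by hand, a minimal sketch using SPDK's bundled scripts/rpc.py client against a running spdk_tgt on the default /var/tmp/spdk.sock (paths relative to the SPDK tree; bdev names are illustrative):

    # Create an 8 MiB malloc bdev with 512-byte blocks, the same parameters the test passes.
    ./scripts/rpc.py bdev_malloc_create 8 512            # prints the new bdev name, e.g. Malloc0
    # Layer a passthru vbdev on top; this claims the base bdev ("claimed": true in the dump above).
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    # Both bdevs are now listed; piping through jq length mirrors the test's '[' 2 == 2 ']' check.
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # Tear down in reverse order so the claim on the base bdev is released first.
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0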
[2024-10-15 11:08:18.829106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.225 [2024-10-15 11:08:18.829122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4850bf0 00:08:38.225 [2024-10-15 11:08:18.829131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.225 [2024-10-15 11:08:18.830008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.225 [2024-10-15 11:08:18.830040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:38.225 Passthru0 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.225 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.483 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.483 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:38.483 { 00:08:38.483 "name": "Malloc2", 00:08:38.483 "aliases": [ 00:08:38.483 "4aea9196-de95-491f-a7e0-0b00976d36a3" 00:08:38.483 ], 00:08:38.483 "product_name": "Malloc disk", 00:08:38.483 "block_size": 512, 00:08:38.483 "num_blocks": 16384, 00:08:38.483 "uuid": "4aea9196-de95-491f-a7e0-0b00976d36a3", 00:08:38.483 "assigned_rate_limits": { 00:08:38.483 "rw_ios_per_sec": 0, 00:08:38.483 "rw_mbytes_per_sec": 0, 00:08:38.483 "r_mbytes_per_sec": 0, 00:08:38.483 "w_mbytes_per_sec": 0 00:08:38.483 }, 00:08:38.483 "claimed": true, 00:08:38.483 "claim_type": "exclusive_write", 00:08:38.483 "zoned": false, 00:08:38.483 "supported_io_types": { 00:08:38.483 "read": true, 00:08:38.483 "write": true, 00:08:38.483 "unmap": true, 00:08:38.483 "flush": true, 00:08:38.483 "reset": true, 00:08:38.483 "nvme_admin": false, 00:08:38.483 "nvme_io": false, 00:08:38.483 "nvme_io_md": false, 00:08:38.483 "write_zeroes": true, 00:08:38.483 "zcopy": true, 00:08:38.483 "get_zone_info": false, 00:08:38.483 "zone_management": false, 00:08:38.483 "zone_append": false, 00:08:38.483 "compare": false, 00:08:38.483 "compare_and_write": false, 00:08:38.483 "abort": true, 00:08:38.483 "seek_hole": false, 00:08:38.483 "seek_data": false, 00:08:38.483 "copy": true, 00:08:38.483 "nvme_iov_md": false 00:08:38.483 }, 00:08:38.483 "memory_domains": [ 00:08:38.483 { 00:08:38.483 "dma_device_id": "system", 00:08:38.483 "dma_device_type": 1 00:08:38.483 }, 00:08:38.483 { 00:08:38.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.483 "dma_device_type": 2 00:08:38.483 } 00:08:38.483 ], 00:08:38.483 "driver_specific": {} 00:08:38.483 }, 00:08:38.483 { 00:08:38.483 "name": "Passthru0", 00:08:38.483 "aliases": [ 00:08:38.483 "93141a76-0c3d-5f52-9c74-eac9de02380d" 00:08:38.483 ], 00:08:38.483 "product_name": "passthru", 00:08:38.483 "block_size": 512, 00:08:38.483 "num_blocks": 16384, 00:08:38.483 "uuid": "93141a76-0c3d-5f52-9c74-eac9de02380d", 00:08:38.483 "assigned_rate_limits": { 00:08:38.483 "rw_ios_per_sec": 0, 00:08:38.483 "rw_mbytes_per_sec": 0, 00:08:38.483 "r_mbytes_per_sec": 0, 00:08:38.483 "w_mbytes_per_sec": 0 00:08:38.483 }, 00:08:38.483 "claimed": false, 00:08:38.483 "zoned": false, 00:08:38.483 "supported_io_types": { 00:08:38.483 "read": true, 00:08:38.483 "write": true, 00:08:38.483 "unmap": true, 00:08:38.483 "flush": true, 00:08:38.483 "reset": true, 
00:08:38.483 "nvme_admin": false, 00:08:38.483 "nvme_io": false, 00:08:38.484 "nvme_io_md": false, 00:08:38.484 "write_zeroes": true, 00:08:38.484 "zcopy": true, 00:08:38.484 "get_zone_info": false, 00:08:38.484 "zone_management": false, 00:08:38.484 "zone_append": false, 00:08:38.484 "compare": false, 00:08:38.484 "compare_and_write": false, 00:08:38.484 "abort": true, 00:08:38.484 "seek_hole": false, 00:08:38.484 "seek_data": false, 00:08:38.484 "copy": true, 00:08:38.484 "nvme_iov_md": false 00:08:38.484 }, 00:08:38.484 "memory_domains": [ 00:08:38.484 { 00:08:38.484 "dma_device_id": "system", 00:08:38.484 "dma_device_type": 1 00:08:38.484 }, 00:08:38.484 { 00:08:38.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.484 "dma_device_type": 2 00:08:38.484 } 00:08:38.484 ], 00:08:38.484 "driver_specific": { 00:08:38.484 "passthru": { 00:08:38.484 "name": "Passthru0", 00:08:38.484 "base_bdev_name": "Malloc2" 00:08:38.484 } 00:08:38.484 } 00:08:38.484 } 00:08:38.484 ]' 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:38.484 00:08:38.484 real 0m0.290s 00:08:38.484 user 0m0.175s 00:08:38.484 sys 0m0.056s 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.484 11:08:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:38.484 ************************************ 00:08:38.484 END TEST rpc_daemon_integrity 00:08:38.484 ************************************ 00:08:38.484 11:08:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:38.484 11:08:19 rpc -- rpc/rpc.sh@84 -- # killprocess 3702836 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@950 -- # '[' -z 3702836 ']' 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@954 -- # kill -0 3702836 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@955 -- # uname 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3702836 
00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3702836' 00:08:38.484 killing process with pid 3702836 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@969 -- # kill 3702836 00:08:38.484 11:08:19 rpc -- common/autotest_common.sh@974 -- # wait 3702836 00:08:38.742 00:08:38.742 real 0m2.139s 00:08:38.742 user 0m2.696s 00:08:38.742 sys 0m0.807s 00:08:38.742 11:08:19 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.742 11:08:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.743 ************************************ 00:08:38.743 END TEST rpc 00:08:38.743 ************************************ 00:08:39.001 11:08:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:39.001 11:08:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.001 11:08:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.001 11:08:19 -- common/autotest_common.sh@10 -- # set +x 00:08:39.001 ************************************ 00:08:39.001 START TEST skip_rpc 00:08:39.001 ************************************ 00:08:39.001 11:08:19 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:39.001 * Looking for test storage... 00:08:39.001 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:39.001 11:08:19 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.001 11:08:19 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.001 11:08:19 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.001 11:08:19 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:39.001 11:08:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.261 11:08:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.261 --rc genhtml_branch_coverage=1 00:08:39.261 --rc genhtml_function_coverage=1 00:08:39.261 --rc genhtml_legend=1 00:08:39.261 --rc geninfo_all_blocks=1 00:08:39.261 --rc geninfo_unexecuted_blocks=1 00:08:39.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.261 ' 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.261 --rc genhtml_branch_coverage=1 00:08:39.261 --rc genhtml_function_coverage=1 00:08:39.261 --rc genhtml_legend=1 00:08:39.261 --rc geninfo_all_blocks=1 00:08:39.261 --rc geninfo_unexecuted_blocks=1 00:08:39.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.261 ' 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.261 --rc genhtml_branch_coverage=1 00:08:39.261 --rc genhtml_function_coverage=1 00:08:39.261 --rc genhtml_legend=1 00:08:39.261 --rc geninfo_all_blocks=1 00:08:39.261 --rc geninfo_unexecuted_blocks=1 00:08:39.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.261 ' 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.261 --rc genhtml_branch_coverage=1 00:08:39.261 --rc genhtml_function_coverage=1 00:08:39.261 --rc genhtml_legend=1 00:08:39.261 --rc geninfo_all_blocks=1 00:08:39.261 --rc geninfo_unexecuted_blocks=1 00:08:39.261 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.261 ' 00:08:39.261 11:08:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:39.261 11:08:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:08:39.261 11:08:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.261 11:08:19 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.261 11:08:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.261 ************************************ 00:08:39.261 START TEST skip_rpc 00:08:39.261 ************************************ 00:08:39.261 11:08:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:39.261 11:08:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3703317 00:08:39.261 11:08:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.261 11:08:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:39.261 11:08:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:39.261 [2024-10-15 11:08:19.700164] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:39.261 [2024-10-15 11:08:19.700220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703317 ] 00:08:39.261 [2024-10-15 11:08:19.768050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.261 [2024-10-15 11:08:19.815639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3703317 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3703317 ']' 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3703317 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3703317 
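Before TEST skip_rpc started, the trace above walked scripts/common.sh's cmp_versions helper to decide whether the installed lcov predates 2.x (lt 1.15 2) and therefore still needs the --rc branch/function-coverage flags and the gcov-tool wrapper. The algorithm is a plain component-wise numeric compare: split both versions on '.', '-' and ':', pad the shorter with zeros, walk left to right. A compact re-implementation of the same idea, assuming purely numeric components:

    lt() {  # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}           # missing components count as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                                      # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.x detected: keep the branch-coverage workaround"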
00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3703317' 00:08:44.541 killing process with pid 3703317 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3703317 00:08:44.541 11:08:24 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3703317 00:08:44.541 00:08:44.541 real 0m5.374s 00:08:44.541 user 0m5.122s 00:08:44.541 sys 0m0.294s 00:08:44.541 11:08:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.541 11:08:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.541 ************************************ 00:08:44.541 END TEST skip_rpc 00:08:44.541 ************************************ 00:08:44.541 11:08:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:44.541 11:08:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.541 11:08:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.541 11:08:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.541 ************************************ 00:08:44.541 START TEST skip_rpc_with_json 00:08:44.541 ************************************ 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3704104 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3704104 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3704104 ']' 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.541 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:44.541 [2024-10-15 11:08:25.160560] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
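The skip_rpc test that closed just above is the negative baseline for the suite: it launches spdk_tgt with --no-rpc-server, waits, and then requires that an RPC call fail (the NOT wrapper turns rpc_cmd's es=1 into a pass). A self-contained sketch of that check (binary path as built in this workspace; the -t 2 timeout just keeps the expected failure quick):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                           # same settle time as skip_rpc.sh@19
    if scripts/rpc.py -t 2 spdk_get_version; then     # must fail: nothing listens on the socket
        echo "unexpected: RPC server answered" >&2
        exit 1
    fi
    kill "$pid" && wait "$pid"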
00:08:44.541 [2024-10-15 11:08:25.160639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704104 ] 00:08:44.801 [2024-10-15 11:08:25.229034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.801 [2024-10-15 11:08:25.272437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:45.061 [2024-10-15 11:08:25.488931] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:45.061 request: 00:08:45.061 { 00:08:45.061 "trtype": "tcp", 00:08:45.061 "method": "nvmf_get_transports", 00:08:45.061 "req_id": 1 00:08:45.061 } 00:08:45.061 Got JSON-RPC error response 00:08:45.061 response: 00:08:45.061 { 00:08:45.061 "code": -19, 00:08:45.061 "message": "No such device" 00:08:45.061 } 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:45.061 [2024-10-15 11:08:25.497007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.061 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:45.061 { 00:08:45.061 "subsystems": [ 00:08:45.061 { 00:08:45.061 "subsystem": "scheduler", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "framework_set_scheduler", 00:08:45.061 "params": { 00:08:45.061 "name": "static" 00:08:45.061 } 00:08:45.061 } 00:08:45.061 ] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "vmd", 00:08:45.061 "config": [] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "sock", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "sock_set_default_impl", 00:08:45.061 "params": { 00:08:45.061 "impl_name": "posix" 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "sock_impl_set_options", 00:08:45.061 "params": { 00:08:45.061 "impl_name": "ssl", 00:08:45.061 "recv_buf_size": 4096, 00:08:45.061 "send_buf_size": 4096, 00:08:45.061 "enable_recv_pipe": true, 00:08:45.061 "enable_quickack": false, 00:08:45.061 
"enable_placement_id": 0, 00:08:45.061 "enable_zerocopy_send_server": true, 00:08:45.061 "enable_zerocopy_send_client": false, 00:08:45.061 "zerocopy_threshold": 0, 00:08:45.061 "tls_version": 0, 00:08:45.061 "enable_ktls": false 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "sock_impl_set_options", 00:08:45.061 "params": { 00:08:45.061 "impl_name": "posix", 00:08:45.061 "recv_buf_size": 2097152, 00:08:45.061 "send_buf_size": 2097152, 00:08:45.061 "enable_recv_pipe": true, 00:08:45.061 "enable_quickack": false, 00:08:45.061 "enable_placement_id": 0, 00:08:45.061 "enable_zerocopy_send_server": true, 00:08:45.061 "enable_zerocopy_send_client": false, 00:08:45.061 "zerocopy_threshold": 0, 00:08:45.061 "tls_version": 0, 00:08:45.061 "enable_ktls": false 00:08:45.061 } 00:08:45.061 } 00:08:45.061 ] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "iobuf", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "iobuf_set_options", 00:08:45.061 "params": { 00:08:45.061 "small_pool_count": 8192, 00:08:45.061 "large_pool_count": 1024, 00:08:45.061 "small_bufsize": 8192, 00:08:45.061 "large_bufsize": 135168 00:08:45.061 } 00:08:45.061 } 00:08:45.061 ] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "keyring", 00:08:45.061 "config": [] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "vfio_user_target", 00:08:45.061 "config": null 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "fsdev", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "fsdev_set_opts", 00:08:45.061 "params": { 00:08:45.061 "fsdev_io_pool_size": 65535, 00:08:45.061 "fsdev_io_cache_size": 256 00:08:45.061 } 00:08:45.061 } 00:08:45.061 ] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "accel", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "accel_set_options", 00:08:45.061 "params": { 00:08:45.061 "small_cache_size": 128, 00:08:45.061 "large_cache_size": 16, 00:08:45.061 "task_count": 2048, 00:08:45.061 "sequence_count": 2048, 00:08:45.061 "buf_count": 2048 00:08:45.061 } 00:08:45.061 } 00:08:45.061 ] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "bdev", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "bdev_set_options", 00:08:45.061 "params": { 00:08:45.061 "bdev_io_pool_size": 65535, 00:08:45.061 "bdev_io_cache_size": 256, 00:08:45.061 "bdev_auto_examine": true, 00:08:45.061 "iobuf_small_cache_size": 128, 00:08:45.061 "iobuf_large_cache_size": 16 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "bdev_raid_set_options", 00:08:45.061 "params": { 00:08:45.061 "process_window_size_kb": 1024, 00:08:45.061 "process_max_bandwidth_mb_sec": 0 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "bdev_nvme_set_options", 00:08:45.061 "params": { 00:08:45.061 "action_on_timeout": "none", 00:08:45.061 "timeout_us": 0, 00:08:45.061 "timeout_admin_us": 0, 00:08:45.061 "keep_alive_timeout_ms": 10000, 00:08:45.061 "arbitration_burst": 0, 00:08:45.061 "low_priority_weight": 0, 00:08:45.061 "medium_priority_weight": 0, 00:08:45.061 "high_priority_weight": 0, 00:08:45.061 "nvme_adminq_poll_period_us": 10000, 00:08:45.061 "nvme_ioq_poll_period_us": 0, 00:08:45.061 "io_queue_requests": 0, 00:08:45.061 "delay_cmd_submit": true, 00:08:45.061 "transport_retry_count": 4, 00:08:45.061 "bdev_retry_count": 3, 00:08:45.061 "transport_ack_timeout": 0, 00:08:45.061 "ctrlr_loss_timeout_sec": 0, 00:08:45.061 "reconnect_delay_sec": 0, 00:08:45.061 "fast_io_fail_timeout_sec": 0, 00:08:45.061 
"disable_auto_failback": false, 00:08:45.061 "generate_uuids": false, 00:08:45.061 "transport_tos": 0, 00:08:45.061 "nvme_error_stat": false, 00:08:45.061 "rdma_srq_size": 0, 00:08:45.061 "io_path_stat": false, 00:08:45.061 "allow_accel_sequence": false, 00:08:45.061 "rdma_max_cq_size": 0, 00:08:45.061 "rdma_cm_event_timeout_ms": 0, 00:08:45.061 "dhchap_digests": [ 00:08:45.061 "sha256", 00:08:45.061 "sha384", 00:08:45.061 "sha512" 00:08:45.061 ], 00:08:45.061 "dhchap_dhgroups": [ 00:08:45.061 "null", 00:08:45.061 "ffdhe2048", 00:08:45.061 "ffdhe3072", 00:08:45.061 "ffdhe4096", 00:08:45.061 "ffdhe6144", 00:08:45.061 "ffdhe8192" 00:08:45.061 ] 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "bdev_nvme_set_hotplug", 00:08:45.061 "params": { 00:08:45.061 "period_us": 100000, 00:08:45.061 "enable": false 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "bdev_iscsi_set_options", 00:08:45.061 "params": { 00:08:45.061 "timeout_sec": 30 00:08:45.061 } 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "method": "bdev_wait_for_examine" 00:08:45.061 } 00:08:45.061 ] 00:08:45.061 }, 00:08:45.061 { 00:08:45.061 "subsystem": "nvmf", 00:08:45.061 "config": [ 00:08:45.061 { 00:08:45.061 "method": "nvmf_set_config", 00:08:45.061 "params": { 00:08:45.061 "discovery_filter": "match_any", 00:08:45.061 "admin_cmd_passthru": { 00:08:45.061 "identify_ctrlr": false 00:08:45.061 }, 00:08:45.061 "dhchap_digests": [ 00:08:45.061 "sha256", 00:08:45.061 "sha384", 00:08:45.061 "sha512" 00:08:45.062 ], 00:08:45.062 "dhchap_dhgroups": [ 00:08:45.062 "null", 00:08:45.062 "ffdhe2048", 00:08:45.062 "ffdhe3072", 00:08:45.062 "ffdhe4096", 00:08:45.062 "ffdhe6144", 00:08:45.062 "ffdhe8192" 00:08:45.062 ] 00:08:45.062 } 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "method": "nvmf_set_max_subsystems", 00:08:45.062 "params": { 00:08:45.062 "max_subsystems": 1024 00:08:45.062 } 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "method": "nvmf_set_crdt", 00:08:45.062 "params": { 00:08:45.062 "crdt1": 0, 00:08:45.062 "crdt2": 0, 00:08:45.062 "crdt3": 0 00:08:45.062 } 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "method": "nvmf_create_transport", 00:08:45.062 "params": { 00:08:45.062 "trtype": "TCP", 00:08:45.062 "max_queue_depth": 128, 00:08:45.062 "max_io_qpairs_per_ctrlr": 127, 00:08:45.062 "in_capsule_data_size": 4096, 00:08:45.062 "max_io_size": 131072, 00:08:45.062 "io_unit_size": 131072, 00:08:45.062 "max_aq_depth": 128, 00:08:45.062 "num_shared_buffers": 511, 00:08:45.062 "buf_cache_size": 4294967295, 00:08:45.062 "dif_insert_or_strip": false, 00:08:45.062 "zcopy": false, 00:08:45.062 "c2h_success": true, 00:08:45.062 "sock_priority": 0, 00:08:45.062 "abort_timeout_sec": 1, 00:08:45.062 "ack_timeout": 0, 00:08:45.062 "data_wr_pool_size": 0 00:08:45.062 } 00:08:45.062 } 00:08:45.062 ] 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "subsystem": "nbd", 00:08:45.062 "config": [] 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "subsystem": "ublk", 00:08:45.062 "config": [] 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "subsystem": "vhost_blk", 00:08:45.062 "config": [] 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "subsystem": "scsi", 00:08:45.062 "config": null 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "subsystem": "iscsi", 00:08:45.062 "config": [ 00:08:45.062 { 00:08:45.062 "method": "iscsi_set_options", 00:08:45.062 "params": { 00:08:45.062 "node_base": "iqn.2016-06.io.spdk", 00:08:45.062 "max_sessions": 128, 00:08:45.062 "max_connections_per_session": 2, 00:08:45.062 "max_queue_depth": 64, 00:08:45.062 
"default_time2wait": 2, 00:08:45.062 "default_time2retain": 20, 00:08:45.062 "first_burst_length": 8192, 00:08:45.062 "immediate_data": true, 00:08:45.062 "allow_duplicated_isid": false, 00:08:45.062 "error_recovery_level": 0, 00:08:45.062 "nop_timeout": 60, 00:08:45.062 "nop_in_interval": 30, 00:08:45.062 "disable_chap": false, 00:08:45.062 "require_chap": false, 00:08:45.062 "mutual_chap": false, 00:08:45.062 "chap_group": 0, 00:08:45.062 "max_large_datain_per_connection": 64, 00:08:45.062 "max_r2t_per_connection": 4, 00:08:45.062 "pdu_pool_size": 36864, 00:08:45.062 "immediate_data_pool_size": 16384, 00:08:45.062 "data_out_pool_size": 2048 00:08:45.062 } 00:08:45.062 } 00:08:45.062 ] 00:08:45.062 }, 00:08:45.062 { 00:08:45.062 "subsystem": "vhost_scsi", 00:08:45.062 "config": [] 00:08:45.062 } 00:08:45.062 ] 00:08:45.062 } 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3704104 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3704104 ']' 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3704104 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.062 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3704104 00:08:45.321 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.321 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.321 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3704104' 00:08:45.321 killing process with pid 3704104 00:08:45.321 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3704104 00:08:45.321 11:08:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3704104 00:08:45.579 11:08:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3704126 00:08:45.579 11:08:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:45.579 11:08:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:50.854 11:08:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3704126 00:08:50.854 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3704126 ']' 00:08:50.854 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3704126 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3704126 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3704126' 00:08:50.855 killing process with pid 3704126 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3704126 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3704126 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:08:50.855 00:08:50.855 real 0m6.234s 00:08:50.855 user 0m5.919s 00:08:50.855 sys 0m0.594s 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:50.855 ************************************ 00:08:50.855 END TEST skip_rpc_with_json 00:08:50.855 ************************************ 00:08:50.855 11:08:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:50.855 11:08:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.855 11:08:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.855 11:08:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.855 ************************************ 00:08:50.855 START TEST skip_rpc_with_delay 00:08:50.855 ************************************ 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
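skip_rpc_with_delay, started just above, issues exactly one interesting command: spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc, a contradictory pair (wait for RPC configuration with no RPC server to receive it), and the NOT wrapper demands that the launch fail, which the app.c error on the next line confirms. The equivalent standalone check, with the flags taken verbatim from the trace:

    # Expected to exit non-zero before any reactor starts.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt started despite contradictory flags" >&2
        exit 1
    fi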
00:08:50.855 [2024-10-15 11:08:31.468482] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.855 00:08:50.855 real 0m0.039s 00:08:50.855 user 0m0.020s 00:08:50.855 sys 0m0.019s 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.855 11:08:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:50.855 ************************************ 00:08:50.855 END TEST skip_rpc_with_delay 00:08:50.855 ************************************ 00:08:51.114 11:08:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:51.114 11:08:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:51.114 11:08:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:51.114 11:08:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.114 11:08:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.114 11:08:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 ************************************ 00:08:51.114 START TEST exit_on_failed_rpc_init 00:08:51.114 ************************************ 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3704935 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3704935 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3704935 ']' 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 11:08:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:51.114 [2024-10-15 11:08:31.592630] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
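After the expected --wait-for-rpc refusal, exit_on_failed_rpc_init launches a first target and blocks in waitforlisten (max_retries=100) until the RPC UNIX socket appears. A sketch of that readiness poll follows, with the socket path and retry count taken from the log; the 0.1 s interval is an assumption, and the real helper in autotest_common.sh does additional checks beyond these two.

    # Sketch of the waitforlisten pattern echoed above: wait until the pid is
    # listening on its RPC UNIX domain socket, or give up after max_retries.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket is up: ready
            sleep 0.1                                # interval is illustrative
        done
        return 1
    }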
00:08:51.114 [2024-10-15 11:08:31.592690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704935 ] 00:08:51.114 [2024-10-15 11:08:31.660634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.114 [2024-10-15 11:08:31.708963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:51.374 11:08:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:51.374 [2024-10-15 11:08:31.949298] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:51.374 [2024-10-15 11:08:31.949370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705047 ] 00:08:51.633 [2024-10-15 11:08:32.018044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.633 [2024-10-15 11:08:32.064043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.633 [2024-10-15 11:08:32.064132] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
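The second target (-m 0x2) is pointed at the same default /var/tmp/spdk.sock while the first still owns it, so rpc.c refuses to listen and the app stops with a non-zero status, which is exactly the failure this test wants to observe. A condensed sketch of that conflict, using the binary and socket paths from the log; the real test wraps the second launch in the NOT helper shown earlier instead of the plain if below.

    # Sketch of the RPC socket conflict driving the errors above: two targets,
    # one default socket. The second launch must fail.
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done  # crude readiness wait
    if ./build/bin/spdk_tgt -m 0x2; then                     # same socket: refused
        echo 'unexpected: second target started' >&2
        exit 1
    fi
    kill -SIGINT "$first"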
00:08:51.633 [2024-10-15 11:08:32.064144] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:51.633 [2024-10-15 11:08:32.064152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3704935 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3704935 ']' 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3704935 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3704935 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3704935' 00:08:51.633 killing process with pid 3704935 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3704935 00:08:51.633 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3704935 00:08:51.892 00:08:51.892 real 0m0.886s 00:08:51.892 user 0m0.914s 00:08:51.892 sys 0m0.384s 00:08:51.892 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.892 11:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:51.892 ************************************ 00:08:51.892 END TEST exit_on_failed_rpc_init 00:08:51.892 ************************************ 00:08:51.892 11:08:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:51.892 00:08:51.892 real 0m13.047s 00:08:51.892 user 0m12.179s 00:08:51.892 sys 0m1.636s 00:08:51.892 11:08:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.892 11:08:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.892 ************************************ 00:08:51.892 END TEST skip_rpc 00:08:51.892 ************************************ 00:08:52.151 11:08:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:52.151 11:08:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.151 11:08:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.151 11:08:32 
-- common/autotest_common.sh@10 -- # set +x 00:08:52.151 ************************************ 00:08:52.151 START TEST rpc_client 00:08:52.151 ************************************ 00:08:52.151 11:08:32 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:52.151 * Looking for test storage... 00:08:52.151 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:08:52.151 11:08:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:52.151 11:08:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:08:52.151 11:08:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:52.151 11:08:32 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.151 11:08:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.152 11:08:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.152 --rc genhtml_branch_coverage=1 00:08:52.152 --rc genhtml_function_coverage=1 00:08:52.152 --rc genhtml_legend=1 00:08:52.152 --rc geninfo_all_blocks=1 00:08:52.152 --rc geninfo_unexecuted_blocks=1 00:08:52.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.152 ' 00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.152 --rc genhtml_branch_coverage=1 00:08:52.152 --rc genhtml_function_coverage=1 00:08:52.152 --rc genhtml_legend=1 00:08:52.152 --rc geninfo_all_blocks=1 00:08:52.152 --rc geninfo_unexecuted_blocks=1 00:08:52.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.152 ' 00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.152 --rc genhtml_branch_coverage=1 00:08:52.152 --rc genhtml_function_coverage=1 00:08:52.152 --rc genhtml_legend=1 00:08:52.152 --rc geninfo_all_blocks=1 00:08:52.152 --rc geninfo_unexecuted_blocks=1 00:08:52.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.152 ' 00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.152 --rc genhtml_branch_coverage=1 00:08:52.152 --rc genhtml_function_coverage=1 00:08:52.152 --rc genhtml_legend=1 00:08:52.152 --rc geninfo_all_blocks=1 00:08:52.152 --rc geninfo_unexecuted_blocks=1 00:08:52.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.152 ' 00:08:52.152 11:08:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:52.152 OK 00:08:52.152 11:08:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:52.152 00:08:52.152 real 0m0.205s 00:08:52.152 user 0m0.114s 00:08:52.152 sys 0m0.107s 00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
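The rpc_client section opens with the same lcov version probe every suite runs: lt 1.15 2 calls cmp_versions, which splits both versions into arrays on IFS=.-: and compares them component-wise. A self-contained sketch of that comparison (dots only, less-than only; the real scripts/common.sh also splits on '-' and ':' and handles the other operators):

    # Sketch of the cmp_versions walk traced above: component-wise numeric
    # compare of two dotted versions, missing components defaulting to 0.
    lt_sketch() {
        local IFS=. v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    # 1.15 < 2, so the old-lcov LCOV_OPTS branch traced above is taken.
    lt_sketch 1.15 2 && echo 'old lcov: enable branch/function coverage flags'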
00:08:52.152 11:08:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:52.152 ************************************ 00:08:52.152 END TEST rpc_client 00:08:52.152 ************************************ 00:08:52.412 11:08:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:08:52.412 11:08:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.412 11:08:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.412 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:08:52.412 ************************************ 00:08:52.412 START TEST json_config 00:08:52.412 ************************************ 00:08:52.412 11:08:32 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:08:52.412 11:08:32 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:52.412 11:08:32 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:08:52.412 11:08:32 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:52.412 11:08:32 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:52.412 11:08:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.412 11:08:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.412 11:08:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.412 11:08:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.412 11:08:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.412 11:08:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.412 11:08:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.412 11:08:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.412 11:08:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.412 11:08:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:52.412 11:08:32 json_config -- scripts/common.sh@345 -- # : 1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.412 11:08:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.412 11:08:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@353 -- # local d=1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.412 11:08:32 json_config -- scripts/common.sh@355 -- # echo 1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.412 11:08:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:52.412 11:08:32 json_config -- scripts/common.sh@353 -- # local d=2 00:08:52.412 11:08:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.412 11:08:33 json_config -- scripts/common.sh@355 -- # echo 2 00:08:52.412 11:08:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.412 11:08:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.412 11:08:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.412 11:08:33 json_config -- scripts/common.sh@368 -- # return 0 00:08:52.412 11:08:33 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.412 11:08:33 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.412 --rc genhtml_branch_coverage=1 00:08:52.412 --rc genhtml_function_coverage=1 00:08:52.412 --rc genhtml_legend=1 00:08:52.412 --rc geninfo_all_blocks=1 00:08:52.412 --rc geninfo_unexecuted_blocks=1 00:08:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.412 ' 00:08:52.412 11:08:33 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.412 --rc genhtml_branch_coverage=1 00:08:52.412 --rc genhtml_function_coverage=1 00:08:52.412 --rc genhtml_legend=1 00:08:52.412 --rc geninfo_all_blocks=1 00:08:52.412 --rc geninfo_unexecuted_blocks=1 00:08:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.412 ' 00:08:52.412 11:08:33 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.412 --rc genhtml_branch_coverage=1 00:08:52.412 --rc genhtml_function_coverage=1 00:08:52.412 --rc genhtml_legend=1 00:08:52.412 --rc geninfo_all_blocks=1 00:08:52.412 --rc geninfo_unexecuted_blocks=1 00:08:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.412 ' 00:08:52.412 11:08:33 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:52.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.412 --rc genhtml_branch_coverage=1 00:08:52.412 --rc genhtml_function_coverage=1 00:08:52.412 --rc genhtml_legend=1 00:08:52.412 --rc geninfo_all_blocks=1 00:08:52.412 --rc geninfo_unexecuted_blocks=1 00:08:52.412 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.412 ' 00:08:52.412 11:08:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:52.412 11:08:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:52.413 11:08:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.413 11:08:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.413 11:08:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.413 11:08:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.413 11:08:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.413 11:08:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.413 11:08:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.413 11:08:33 json_config -- paths/export.sh@5 -- # export PATH 00:08:52.413 11:08:33 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@51 -- # : 0 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.413 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.413 11:08:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:52.413 WARNING: No tests are enabled so not running JSON configuration tests 00:08:52.413 11:08:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:52.413 00:08:52.413 real 0m0.190s 00:08:52.413 user 0m0.113s 00:08:52.413 sys 0m0.084s 00:08:52.413 11:08:33 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.671 11:08:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.671 ************************************ 00:08:52.671 END TEST json_config 00:08:52.671 ************************************ 00:08:52.671 11:08:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:52.671 11:08:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.671 11:08:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.671 11:08:33 -- common/autotest_common.sh@10 -- # set +x 00:08:52.671 ************************************ 00:08:52.671 START TEST json_config_extra_key 00:08:52.671 ************************************ 00:08:52.671 11:08:33 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:52.671 11:08:33 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:52.671 11:08:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov 
--version 00:08:52.671 11:08:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:52.671 11:08:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.671 11:08:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.672 11:08:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:52.672 11:08:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.672 11:08:33 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:52.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.672 --rc genhtml_branch_coverage=1 00:08:52.672 --rc genhtml_function_coverage=1 00:08:52.672 --rc genhtml_legend=1 00:08:52.672 --rc geninfo_all_blocks=1 00:08:52.672 --rc geninfo_unexecuted_blocks=1 00:08:52.672 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.672 ' 00:08:52.672 11:08:33 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:52.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.672 --rc genhtml_branch_coverage=1 
00:08:52.672 --rc genhtml_function_coverage=1 00:08:52.672 --rc genhtml_legend=1 00:08:52.672 --rc geninfo_all_blocks=1 00:08:52.672 --rc geninfo_unexecuted_blocks=1 00:08:52.672 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.672 ' 00:08:52.672 11:08:33 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:52.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.672 --rc genhtml_branch_coverage=1 00:08:52.672 --rc genhtml_function_coverage=1 00:08:52.672 --rc genhtml_legend=1 00:08:52.672 --rc geninfo_all_blocks=1 00:08:52.672 --rc geninfo_unexecuted_blocks=1 00:08:52.672 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.672 ' 00:08:52.672 11:08:33 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:52.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.672 --rc genhtml_branch_coverage=1 00:08:52.672 --rc genhtml_function_coverage=1 00:08:52.672 --rc genhtml_legend=1 00:08:52.672 --rc geninfo_all_blocks=1 00:08:52.672 --rc geninfo_unexecuted_blocks=1 00:08:52.672 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:52.672 ' 00:08:52.672 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:52.672 11:08:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:52.930 11:08:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.930 11:08:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.930 11:08:33 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.930 11:08:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.930 11:08:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.930 11:08:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.930 11:08:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.930 11:08:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:52.930 11:08:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.930 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.930 11:08:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:52.930 INFO: launching applications... 00:08:52.930 11:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:08:52.930 11:08:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:52.930 11:08:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:52.930 11:08:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:52.930 11:08:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:52.930 11:08:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:52.930 11:08:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:52.931 11:08:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:52.931 11:08:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3705393 00:08:52.931 11:08:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:52.931 Waiting for target to run... 00:08:52.931 11:08:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3705393 /var/tmp/spdk_tgt.sock 00:08:52.931 11:08:33 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3705393 ']' 00:08:52.931 11:08:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:08:52.931 11:08:33 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:52.931 11:08:33 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.931 11:08:33 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:52.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
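json_config_extra_key keeps all per-app state in the associative arrays declared above, keyed by app name ('target'): pid, RPC socket, spdk_tgt flags, and the JSON config to load, so the start and shutdown helpers stay generic over apps. A sketch of that bookkeeping and the launch it feeds, with paths abbreviated from the log; start_app_sketch is a hypothetical stand-in for json_config_test_start_app.

    # Sketch of the per-app bookkeeping declared above.
    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=./test/json_config/extra_key.json

    start_app_sketch() {
        local app=$1
        # Unquoted expansion is deliberate: app_params holds multiple flags.
        ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }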
00:08:52.931 11:08:33 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.931 11:08:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:52.931 [2024-10-15 11:08:33.341196] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:52.931 [2024-10-15 11:08:33.341260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705393 ] 00:08:53.189 [2024-10-15 11:08:33.781004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.447 [2024-10-15 11:08:33.835289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.706 11:08:34 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.706 11:08:34 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:53.706 00:08:53.706 11:08:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:53.706 INFO: shutting down applications... 00:08:53.706 11:08:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3705393 ]] 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3705393 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705393 00:08:53.706 11:08:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705393 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:54.275 11:08:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:54.275 SPDK target shutdown done 00:08:54.275 11:08:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:54.275 Success 00:08:54.275 00:08:54.275 real 0m1.589s 00:08:54.275 user 0m1.183s 00:08:54.275 sys 0m0.592s 00:08:54.275 11:08:34 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.275 11:08:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:54.275 ************************************ 00:08:54.275 END TEST json_config_extra_key 00:08:54.275 ************************************ 00:08:54.275 11:08:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
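Shutdown in this suite is the inverse of the readiness wait: SIGINT the target, then re-check with kill -0 every 0.5 s for up to 30 iterations until the pid is gone, as the i < 30 / sleep 0.5 loop above shows before 'SPDK target shutdown done'. A sketch of that loop; the escalation the real common.sh performs when the loop expires is reduced here to a non-zero return.

    # Sketch of the graceful-shutdown poll visible above.
    shutdown_app_sketch() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1   # still alive after 15 s
    }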
00:08:54.275 11:08:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.275 11:08:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.275 11:08:34 -- common/autotest_common.sh@10 -- # set +x 00:08:54.275 ************************************ 00:08:54.275 START TEST alias_rpc 00:08:54.275 ************************************ 00:08:54.275 11:08:34 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:54.275 * Looking for test storage... 00:08:54.275 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:08:54.275 11:08:34 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.275 11:08:34 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.275 11:08:34 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.535 11:08:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.535 --rc genhtml_branch_coverage=1 00:08:54.535 --rc genhtml_function_coverage=1 00:08:54.535 --rc genhtml_legend=1 00:08:54.535 --rc geninfo_all_blocks=1 00:08:54.535 --rc geninfo_unexecuted_blocks=1 00:08:54.535 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:54.535 ' 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.535 --rc genhtml_branch_coverage=1 00:08:54.535 --rc genhtml_function_coverage=1 00:08:54.535 --rc genhtml_legend=1 00:08:54.535 --rc geninfo_all_blocks=1 00:08:54.535 --rc geninfo_unexecuted_blocks=1 00:08:54.535 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:54.535 ' 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:54.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.535 --rc genhtml_branch_coverage=1 00:08:54.535 --rc genhtml_function_coverage=1 00:08:54.535 --rc genhtml_legend=1 00:08:54.535 --rc geninfo_all_blocks=1 00:08:54.535 --rc geninfo_unexecuted_blocks=1 00:08:54.535 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:54.535 ' 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.535 --rc genhtml_branch_coverage=1 00:08:54.535 --rc genhtml_function_coverage=1 00:08:54.535 --rc genhtml_legend=1 00:08:54.535 --rc geninfo_all_blocks=1 00:08:54.535 --rc geninfo_unexecuted_blocks=1 00:08:54.535 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:54.535 ' 00:08:54.535 11:08:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:54.535 11:08:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3705632 00:08:54.535 11:08:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3705632 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3705632 ']' 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.535 11:08:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.535 11:08:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:54.535 [2024-10-15 11:08:34.992341] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:54.535 [2024-10-15 11:08:34.992409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705632 ] 00:08:54.535 [2024-10-15 11:08:35.060397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.535 [2024-10-15 11:08:35.108684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.794 11:08:35 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.794 11:08:35 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:54.794 11:08:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:55.054 11:08:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3705632 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3705632 ']' 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3705632 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3705632 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3705632' 00:08:55.054 killing process with pid 3705632 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@969 -- # kill 3705632 00:08:55.054 11:08:35 alias_rpc -- common/autotest_common.sh@974 -- # wait 3705632 00:08:55.312 00:08:55.312 real 0m1.100s 00:08:55.312 user 0m1.112s 00:08:55.312 sys 0m0.410s 00:08:55.312 11:08:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.312 11:08:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.312 ************************************ 00:08:55.312 END TEST alias_rpc 00:08:55.312 ************************************ 00:08:55.312 11:08:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:55.312 11:08:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:55.312 11:08:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.312 11:08:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.312 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:55.571 ************************************ 00:08:55.571 START TEST 
spdkcli_tcp 00:08:55.571 ************************************ 00:08:55.571 11:08:35 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:55.571 * Looking for test storage... 00:08:55.571 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:08:55.571 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:55.571 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:55.571 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:55.571 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.571 11:08:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.572 11:08:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:55.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.572 --rc genhtml_branch_coverage=1 00:08:55.572 --rc genhtml_function_coverage=1 00:08:55.572 --rc genhtml_legend=1 00:08:55.572 --rc geninfo_all_blocks=1 00:08:55.572 --rc geninfo_unexecuted_blocks=1 00:08:55.572 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:55.572 ' 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:55.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.572 --rc genhtml_branch_coverage=1 00:08:55.572 --rc genhtml_function_coverage=1 00:08:55.572 --rc genhtml_legend=1 00:08:55.572 --rc geninfo_all_blocks=1 00:08:55.572 --rc geninfo_unexecuted_blocks=1 00:08:55.572 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:55.572 ' 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:55.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.572 --rc genhtml_branch_coverage=1 00:08:55.572 --rc genhtml_function_coverage=1 00:08:55.572 --rc genhtml_legend=1 00:08:55.572 --rc geninfo_all_blocks=1 00:08:55.572 --rc geninfo_unexecuted_blocks=1 00:08:55.572 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:55.572 ' 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:55.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.572 --rc genhtml_branch_coverage=1 00:08:55.572 --rc genhtml_function_coverage=1 00:08:55.572 --rc genhtml_legend=1 00:08:55.572 --rc geninfo_all_blocks=1 00:08:55.572 --rc geninfo_unexecuted_blocks=1 00:08:55.572 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:55.572 ' 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3705865 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3705865 00:08:55.572 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3705865 ']' 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.572 11:08:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.572 [2024-10-15 11:08:36.161978] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:55.572 [2024-10-15 11:08:36.162071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705865 ] 00:08:55.831 [2024-10-15 11:08:36.230334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.831 [2024-10-15 11:08:36.276121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.831 [2024-10-15 11:08:36.276123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.090 11:08:36 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.090 11:08:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:56.090 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3705875 00:08:56.090 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:56.090 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:56.090 [ 00:08:56.090 "spdk_get_version", 00:08:56.090 "rpc_get_methods", 00:08:56.090 "notify_get_notifications", 00:08:56.090 "notify_get_types", 00:08:56.090 "trace_get_info", 00:08:56.090 "trace_get_tpoint_group_mask", 00:08:56.090 "trace_disable_tpoint_group", 00:08:56.090 "trace_enable_tpoint_group", 00:08:56.090 "trace_clear_tpoint_mask", 00:08:56.090 "trace_set_tpoint_mask", 00:08:56.090 "fsdev_set_opts", 00:08:56.090 "fsdev_get_opts", 00:08:56.090 "framework_get_pci_devices", 00:08:56.090 "framework_get_config", 00:08:56.090 "framework_get_subsystems", 00:08:56.090 "vfu_tgt_set_base_path", 00:08:56.090 
"keyring_get_keys", 00:08:56.090 "iobuf_get_stats", 00:08:56.090 "iobuf_set_options", 00:08:56.090 "sock_get_default_impl", 00:08:56.090 "sock_set_default_impl", 00:08:56.090 "sock_impl_set_options", 00:08:56.090 "sock_impl_get_options", 00:08:56.090 "vmd_rescan", 00:08:56.090 "vmd_remove_device", 00:08:56.090 "vmd_enable", 00:08:56.090 "accel_get_stats", 00:08:56.090 "accel_set_options", 00:08:56.090 "accel_set_driver", 00:08:56.090 "accel_crypto_key_destroy", 00:08:56.090 "accel_crypto_keys_get", 00:08:56.090 "accel_crypto_key_create", 00:08:56.090 "accel_assign_opc", 00:08:56.090 "accel_get_module_info", 00:08:56.090 "accel_get_opc_assignments", 00:08:56.090 "bdev_get_histogram", 00:08:56.090 "bdev_enable_histogram", 00:08:56.090 "bdev_set_qos_limit", 00:08:56.090 "bdev_set_qd_sampling_period", 00:08:56.090 "bdev_get_bdevs", 00:08:56.090 "bdev_reset_iostat", 00:08:56.090 "bdev_get_iostat", 00:08:56.090 "bdev_examine", 00:08:56.090 "bdev_wait_for_examine", 00:08:56.090 "bdev_set_options", 00:08:56.090 "scsi_get_devices", 00:08:56.090 "thread_set_cpumask", 00:08:56.090 "scheduler_set_options", 00:08:56.090 "framework_get_governor", 00:08:56.090 "framework_get_scheduler", 00:08:56.090 "framework_set_scheduler", 00:08:56.091 "framework_get_reactors", 00:08:56.091 "thread_get_io_channels", 00:08:56.091 "thread_get_pollers", 00:08:56.091 "thread_get_stats", 00:08:56.091 "framework_monitor_context_switch", 00:08:56.091 "spdk_kill_instance", 00:08:56.091 "log_enable_timestamps", 00:08:56.091 "log_get_flags", 00:08:56.091 "log_clear_flag", 00:08:56.091 "log_set_flag", 00:08:56.091 "log_get_level", 00:08:56.091 "log_set_level", 00:08:56.091 "log_get_print_level", 00:08:56.091 "log_set_print_level", 00:08:56.091 "framework_enable_cpumask_locks", 00:08:56.091 "framework_disable_cpumask_locks", 00:08:56.091 "framework_wait_init", 00:08:56.091 "framework_start_init", 00:08:56.091 "virtio_blk_create_transport", 00:08:56.091 "virtio_blk_get_transports", 00:08:56.091 "vhost_controller_set_coalescing", 00:08:56.091 "vhost_get_controllers", 00:08:56.091 "vhost_delete_controller", 00:08:56.091 "vhost_create_blk_controller", 00:08:56.091 "vhost_scsi_controller_remove_target", 00:08:56.091 "vhost_scsi_controller_add_target", 00:08:56.091 "vhost_start_scsi_controller", 00:08:56.091 "vhost_create_scsi_controller", 00:08:56.091 "ublk_recover_disk", 00:08:56.091 "ublk_get_disks", 00:08:56.091 "ublk_stop_disk", 00:08:56.091 "ublk_start_disk", 00:08:56.091 "ublk_destroy_target", 00:08:56.091 "ublk_create_target", 00:08:56.091 "nbd_get_disks", 00:08:56.091 "nbd_stop_disk", 00:08:56.091 "nbd_start_disk", 00:08:56.091 "env_dpdk_get_mem_stats", 00:08:56.091 "nvmf_stop_mdns_prr", 00:08:56.091 "nvmf_publish_mdns_prr", 00:08:56.091 "nvmf_subsystem_get_listeners", 00:08:56.091 "nvmf_subsystem_get_qpairs", 00:08:56.091 "nvmf_subsystem_get_controllers", 00:08:56.091 "nvmf_get_stats", 00:08:56.091 "nvmf_get_transports", 00:08:56.091 "nvmf_create_transport", 00:08:56.091 "nvmf_get_targets", 00:08:56.091 "nvmf_delete_target", 00:08:56.091 "nvmf_create_target", 00:08:56.091 "nvmf_subsystem_allow_any_host", 00:08:56.091 "nvmf_subsystem_set_keys", 00:08:56.091 "nvmf_subsystem_remove_host", 00:08:56.091 "nvmf_subsystem_add_host", 00:08:56.091 "nvmf_ns_remove_host", 00:08:56.091 "nvmf_ns_add_host", 00:08:56.091 "nvmf_subsystem_remove_ns", 00:08:56.091 "nvmf_subsystem_set_ns_ana_group", 00:08:56.091 "nvmf_subsystem_add_ns", 00:08:56.091 "nvmf_subsystem_listener_set_ana_state", 00:08:56.091 "nvmf_discovery_get_referrals", 
00:08:56.091 "nvmf_discovery_remove_referral", 00:08:56.091 "nvmf_discovery_add_referral", 00:08:56.091 "nvmf_subsystem_remove_listener", 00:08:56.091 "nvmf_subsystem_add_listener", 00:08:56.091 "nvmf_delete_subsystem", 00:08:56.091 "nvmf_create_subsystem", 00:08:56.091 "nvmf_get_subsystems", 00:08:56.091 "nvmf_set_crdt", 00:08:56.091 "nvmf_set_config", 00:08:56.091 "nvmf_set_max_subsystems", 00:08:56.091 "iscsi_get_histogram", 00:08:56.091 "iscsi_enable_histogram", 00:08:56.091 "iscsi_set_options", 00:08:56.091 "iscsi_get_auth_groups", 00:08:56.091 "iscsi_auth_group_remove_secret", 00:08:56.091 "iscsi_auth_group_add_secret", 00:08:56.091 "iscsi_delete_auth_group", 00:08:56.091 "iscsi_create_auth_group", 00:08:56.091 "iscsi_set_discovery_auth", 00:08:56.091 "iscsi_get_options", 00:08:56.091 "iscsi_target_node_request_logout", 00:08:56.091 "iscsi_target_node_set_redirect", 00:08:56.091 "iscsi_target_node_set_auth", 00:08:56.091 "iscsi_target_node_add_lun", 00:08:56.091 "iscsi_get_stats", 00:08:56.091 "iscsi_get_connections", 00:08:56.091 "iscsi_portal_group_set_auth", 00:08:56.091 "iscsi_start_portal_group", 00:08:56.091 "iscsi_delete_portal_group", 00:08:56.091 "iscsi_create_portal_group", 00:08:56.091 "iscsi_get_portal_groups", 00:08:56.091 "iscsi_delete_target_node", 00:08:56.091 "iscsi_target_node_remove_pg_ig_maps", 00:08:56.091 "iscsi_target_node_add_pg_ig_maps", 00:08:56.091 "iscsi_create_target_node", 00:08:56.091 "iscsi_get_target_nodes", 00:08:56.091 "iscsi_delete_initiator_group", 00:08:56.091 "iscsi_initiator_group_remove_initiators", 00:08:56.091 "iscsi_initiator_group_add_initiators", 00:08:56.091 "iscsi_create_initiator_group", 00:08:56.091 "iscsi_get_initiator_groups", 00:08:56.091 "fsdev_aio_delete", 00:08:56.091 "fsdev_aio_create", 00:08:56.091 "keyring_linux_set_options", 00:08:56.091 "keyring_file_remove_key", 00:08:56.091 "keyring_file_add_key", 00:08:56.091 "vfu_virtio_create_fs_endpoint", 00:08:56.091 "vfu_virtio_create_scsi_endpoint", 00:08:56.091 "vfu_virtio_scsi_remove_target", 00:08:56.091 "vfu_virtio_scsi_add_target", 00:08:56.091 "vfu_virtio_create_blk_endpoint", 00:08:56.091 "vfu_virtio_delete_endpoint", 00:08:56.091 "iaa_scan_accel_module", 00:08:56.091 "dsa_scan_accel_module", 00:08:56.091 "ioat_scan_accel_module", 00:08:56.091 "accel_error_inject_error", 00:08:56.091 "bdev_iscsi_delete", 00:08:56.091 "bdev_iscsi_create", 00:08:56.091 "bdev_iscsi_set_options", 00:08:56.091 "bdev_virtio_attach_controller", 00:08:56.091 "bdev_virtio_scsi_get_devices", 00:08:56.091 "bdev_virtio_detach_controller", 00:08:56.091 "bdev_virtio_blk_set_hotplug", 00:08:56.091 "bdev_ftl_set_property", 00:08:56.091 "bdev_ftl_get_properties", 00:08:56.091 "bdev_ftl_get_stats", 00:08:56.091 "bdev_ftl_unmap", 00:08:56.091 "bdev_ftl_unload", 00:08:56.091 "bdev_ftl_delete", 00:08:56.091 "bdev_ftl_load", 00:08:56.091 "bdev_ftl_create", 00:08:56.091 "bdev_aio_delete", 00:08:56.091 "bdev_aio_rescan", 00:08:56.091 "bdev_aio_create", 00:08:56.091 "blobfs_create", 00:08:56.091 "blobfs_detect", 00:08:56.091 "blobfs_set_cache_size", 00:08:56.091 "bdev_zone_block_delete", 00:08:56.091 "bdev_zone_block_create", 00:08:56.091 "bdev_delay_delete", 00:08:56.091 "bdev_delay_create", 00:08:56.091 "bdev_delay_update_latency", 00:08:56.091 "bdev_split_delete", 00:08:56.091 "bdev_split_create", 00:08:56.091 "bdev_error_inject_error", 00:08:56.091 "bdev_error_delete", 00:08:56.091 "bdev_error_create", 00:08:56.091 "bdev_raid_set_options", 00:08:56.091 "bdev_raid_remove_base_bdev", 00:08:56.091 
"bdev_raid_add_base_bdev", 00:08:56.091 "bdev_raid_delete", 00:08:56.091 "bdev_raid_create", 00:08:56.091 "bdev_raid_get_bdevs", 00:08:56.091 "bdev_lvol_set_parent_bdev", 00:08:56.091 "bdev_lvol_set_parent", 00:08:56.091 "bdev_lvol_check_shallow_copy", 00:08:56.091 "bdev_lvol_start_shallow_copy", 00:08:56.091 "bdev_lvol_grow_lvstore", 00:08:56.091 "bdev_lvol_get_lvols", 00:08:56.091 "bdev_lvol_get_lvstores", 00:08:56.091 "bdev_lvol_delete", 00:08:56.091 "bdev_lvol_set_read_only", 00:08:56.091 "bdev_lvol_resize", 00:08:56.091 "bdev_lvol_decouple_parent", 00:08:56.091 "bdev_lvol_inflate", 00:08:56.091 "bdev_lvol_rename", 00:08:56.091 "bdev_lvol_clone_bdev", 00:08:56.091 "bdev_lvol_clone", 00:08:56.091 "bdev_lvol_snapshot", 00:08:56.091 "bdev_lvol_create", 00:08:56.091 "bdev_lvol_delete_lvstore", 00:08:56.091 "bdev_lvol_rename_lvstore", 00:08:56.091 "bdev_lvol_create_lvstore", 00:08:56.091 "bdev_passthru_delete", 00:08:56.091 "bdev_passthru_create", 00:08:56.091 "bdev_nvme_cuse_unregister", 00:08:56.091 "bdev_nvme_cuse_register", 00:08:56.091 "bdev_opal_new_user", 00:08:56.091 "bdev_opal_set_lock_state", 00:08:56.091 "bdev_opal_delete", 00:08:56.091 "bdev_opal_get_info", 00:08:56.091 "bdev_opal_create", 00:08:56.091 "bdev_nvme_opal_revert", 00:08:56.091 "bdev_nvme_opal_init", 00:08:56.091 "bdev_nvme_send_cmd", 00:08:56.091 "bdev_nvme_set_keys", 00:08:56.091 "bdev_nvme_get_path_iostat", 00:08:56.091 "bdev_nvme_get_mdns_discovery_info", 00:08:56.091 "bdev_nvme_stop_mdns_discovery", 00:08:56.091 "bdev_nvme_start_mdns_discovery", 00:08:56.091 "bdev_nvme_set_multipath_policy", 00:08:56.091 "bdev_nvme_set_preferred_path", 00:08:56.091 "bdev_nvme_get_io_paths", 00:08:56.091 "bdev_nvme_remove_error_injection", 00:08:56.091 "bdev_nvme_add_error_injection", 00:08:56.091 "bdev_nvme_get_discovery_info", 00:08:56.091 "bdev_nvme_stop_discovery", 00:08:56.091 "bdev_nvme_start_discovery", 00:08:56.091 "bdev_nvme_get_controller_health_info", 00:08:56.091 "bdev_nvme_disable_controller", 00:08:56.091 "bdev_nvme_enable_controller", 00:08:56.091 "bdev_nvme_reset_controller", 00:08:56.091 "bdev_nvme_get_transport_statistics", 00:08:56.091 "bdev_nvme_apply_firmware", 00:08:56.091 "bdev_nvme_detach_controller", 00:08:56.091 "bdev_nvme_get_controllers", 00:08:56.091 "bdev_nvme_attach_controller", 00:08:56.091 "bdev_nvme_set_hotplug", 00:08:56.091 "bdev_nvme_set_options", 00:08:56.091 "bdev_null_resize", 00:08:56.091 "bdev_null_delete", 00:08:56.091 "bdev_null_create", 00:08:56.091 "bdev_malloc_delete", 00:08:56.091 "bdev_malloc_create" 00:08:56.091 ] 00:08:56.091 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:56.091 11:08:36 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.091 11:08:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.091 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:56.091 11:08:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3705865 00:08:56.091 11:08:36 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3705865 ']' 00:08:56.091 11:08:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3705865 00:08:56.091 11:08:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:56.350 11:08:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.350 11:08:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3705865 00:08:56.350 11:08:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.350 
11:08:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.351 11:08:36 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3705865' 00:08:56.351 killing process with pid 3705865 00:08:56.351 11:08:36 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3705865 00:08:56.351 11:08:36 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3705865 00:08:56.610 00:08:56.610 real 0m1.117s 00:08:56.610 user 0m1.849s 00:08:56.610 sys 0m0.493s 00:08:56.610 11:08:37 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.610 11:08:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.610 ************************************ 00:08:56.610 END TEST spdkcli_tcp 00:08:56.610 ************************************ 00:08:56.610 11:08:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:56.610 11:08:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.610 11:08:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.610 11:08:37 -- common/autotest_common.sh@10 -- # set +x 00:08:56.610 ************************************ 00:08:56.610 START TEST dpdk_mem_utility 00:08:56.610 ************************************ 00:08:56.610 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:56.869 * Looking for test storage... 00:08:56.869 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:08:56.869 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:56.869 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:56.869 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:56.869 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:56.869 11:08:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.870 11:08:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.870 --rc genhtml_branch_coverage=1 00:08:56.870 --rc genhtml_function_coverage=1 00:08:56.870 --rc genhtml_legend=1 00:08:56.870 --rc geninfo_all_blocks=1 00:08:56.870 --rc geninfo_unexecuted_blocks=1 00:08:56.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:56.870 ' 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.870 --rc genhtml_branch_coverage=1 00:08:56.870 --rc genhtml_function_coverage=1 00:08:56.870 --rc genhtml_legend=1 00:08:56.870 --rc geninfo_all_blocks=1 00:08:56.870 --rc geninfo_unexecuted_blocks=1 00:08:56.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:56.870 ' 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.870 --rc genhtml_branch_coverage=1 00:08:56.870 --rc genhtml_function_coverage=1 00:08:56.870 --rc genhtml_legend=1 00:08:56.870 --rc geninfo_all_blocks=1 00:08:56.870 --rc geninfo_unexecuted_blocks=1 00:08:56.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:56.870 ' 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.870 --rc genhtml_branch_coverage=1 00:08:56.870 --rc genhtml_function_coverage=1 00:08:56.870 --rc genhtml_legend=1 00:08:56.870 --rc geninfo_all_blocks=1 00:08:56.870 --rc geninfo_unexecuted_blocks=1 00:08:56.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:56.870 ' 00:08:56.870 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:56.870 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:56.870 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3706119 00:08:56.870 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3706119 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3706119 ']' 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.870 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:56.870 [2024-10-15 11:08:37.359233] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:56.870 [2024-10-15 11:08:37.359322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706119 ] 00:08:56.870 [2024-10-15 11:08:37.426513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.870 [2024-10-15 11:08:37.474355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.130 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.130 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:57.130 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:57.130 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:57.130 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.130 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:57.130 { 00:08:57.130 "filename": "/tmp/spdk_mem_dump.txt" 00:08:57.130 } 00:08:57.130 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.130 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:57.130 DPDK memory size 810.000000 MiB in 1 heap(s) 00:08:57.130 1 heaps totaling size 810.000000 MiB 00:08:57.130 size: 810.000000 MiB heap id: 0 00:08:57.130 end heaps---------- 00:08:57.130 9 mempools totaling size 595.772034 MiB 00:08:57.130 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:57.130 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:57.130 size: 92.545471 MiB name: bdev_io_3706119 00:08:57.130 size: 50.003479 MiB name: msgpool_3706119 00:08:57.130 size: 36.509338 MiB name: fsdev_io_3706119 00:08:57.130 size: 21.763794 MiB name: PDU_Pool 00:08:57.130 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:57.130 size: 4.133484 MiB name: evtpool_3706119 00:08:57.130 size: 0.026123 MiB name: Session_Pool 00:08:57.130 end mempools------- 00:08:57.130 6 memzones totaling size 4.142822 MiB 00:08:57.130 size: 1.000366 MiB name: RG_ring_0_3706119 00:08:57.130 size: 1.000366 MiB name: 
RG_ring_1_3706119 00:08:57.130 size: 1.000366 MiB name: RG_ring_4_3706119 00:08:57.130 size: 1.000366 MiB name: RG_ring_5_3706119 00:08:57.130 size: 0.125366 MiB name: RG_ring_2_3706119 00:08:57.130 size: 0.015991 MiB name: RG_ring_3_3706119 00:08:57.130 end memzones------- 00:08:57.130 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:57.391 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:57.391 list of free elements. size: 10.862488 MiB 00:08:57.391 element at address: 0x200018a00000 with size: 0.999878 MiB 00:08:57.391 element at address: 0x200018c00000 with size: 0.999878 MiB 00:08:57.391 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:57.391 element at address: 0x200031800000 with size: 0.994446 MiB 00:08:57.391 element at address: 0x200008000000 with size: 0.959839 MiB 00:08:57.391 element at address: 0x200012c00000 with size: 0.954285 MiB 00:08:57.391 element at address: 0x200018e00000 with size: 0.936584 MiB 00:08:57.391 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:57.391 element at address: 0x20001a600000 with size: 0.582886 MiB 00:08:57.391 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:57.391 element at address: 0x200003e00000 with size: 0.490723 MiB 00:08:57.391 element at address: 0x200019000000 with size: 0.485657 MiB 00:08:57.391 element at address: 0x200010600000 with size: 0.481934 MiB 00:08:57.391 element at address: 0x200027a00000 with size: 0.410034 MiB 00:08:57.391 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:57.391 list of standard malloc elements. size: 199.218628 MiB 00:08:57.391 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:08:57.391 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:08:57.391 element at address: 0x200018afff80 with size: 1.000122 MiB 00:08:57.391 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:08:57.391 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:57.391 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:57.391 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:08:57.391 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:57.391 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:08:57.391 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20000085b100 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000008df880 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200000cff000 with size: 
0.000183 MiB 00:08:57.391 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20001067b600 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:08:57.391 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20001a695380 with size: 0.000183 MiB 00:08:57.391 element at address: 0x20001a695440 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200027a69040 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:08:57.391 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:08:57.391 list of memzone associated elements. size: 599.918884 MiB 00:08:57.391 element at address: 0x20001a695500 with size: 211.416748 MiB 00:08:57.391 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:57.391 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:08:57.391 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:57.391 element at address: 0x200012df4780 with size: 92.045044 MiB 00:08:57.391 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3706119_0 00:08:57.391 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:57.391 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3706119_0 00:08:57.391 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:08:57.391 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3706119_0 00:08:57.391 element at address: 0x2000191be940 with size: 20.255554 MiB 00:08:57.391 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:57.391 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:08:57.391 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:57.391 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:57.391 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3706119_0 00:08:57.391 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:57.391 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3706119 00:08:57.391 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:57.391 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3706119 00:08:57.391 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:08:57.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:57.391 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:08:57.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:57.391 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:08:57.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:57.391 element at address: 0x200003efde40 with 
size: 1.008118 MiB 00:08:57.391 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:57.391 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:57.391 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3706119 00:08:57.391 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:57.391 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3706119 00:08:57.391 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:08:57.391 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3706119 00:08:57.391 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:08:57.391 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3706119 00:08:57.391 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:08:57.391 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3706119 00:08:57.391 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:57.391 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3706119 00:08:57.391 element at address: 0x20001067b780 with size: 0.500488 MiB 00:08:57.391 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:57.391 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:08:57.391 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:57.391 element at address: 0x20001907c540 with size: 0.250488 MiB 00:08:57.391 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:57.391 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:57.391 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3706119 00:08:57.391 element at address: 0x2000008df940 with size: 0.125488 MiB 00:08:57.391 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3706119 00:08:57.391 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:08:57.391 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:57.391 element at address: 0x200027a69100 with size: 0.023743 MiB 00:08:57.391 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:57.391 element at address: 0x2000008db680 with size: 0.016113 MiB 00:08:57.391 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3706119 00:08:57.391 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:08:57.391 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:57.392 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:57.392 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3706119 00:08:57.392 element at address: 0x2000008db480 with size: 0.000305 MiB 00:08:57.392 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3706119 00:08:57.392 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:57.392 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3706119 00:08:57.392 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:08:57.392 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:57.392 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:57.392 11:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3706119 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3706119 ']' 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3706119 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 
00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3706119 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3706119' 00:08:57.392 killing process with pid 3706119 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3706119 00:08:57.392 11:08:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3706119 00:08:57.651 00:08:57.651 real 0m0.976s 00:08:57.651 user 0m0.861s 00:08:57.651 sys 0m0.431s 00:08:57.651 11:08:38 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.651 11:08:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:57.651 ************************************ 00:08:57.651 END TEST dpdk_mem_utility 00:08:57.651 ************************************ 00:08:57.651 11:08:38 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:08:57.651 11:08:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.651 11:08:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.651 11:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:57.651 ************************************ 00:08:57.651 START TEST event 00:08:57.651 ************************************ 00:08:57.651 11:08:38 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:08:57.910 * Looking for test storage... 00:08:57.910 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:57.910 11:08:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.910 11:08:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.910 11:08:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.910 11:08:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.910 11:08:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.910 11:08:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.910 11:08:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.910 11:08:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.910 11:08:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.910 11:08:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.910 11:08:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.910 11:08:38 event -- scripts/common.sh@344 -- # case "$op" in 00:08:57.910 11:08:38 event -- scripts/common.sh@345 -- # : 1 00:08:57.910 11:08:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.910 11:08:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.910 11:08:38 event -- scripts/common.sh@365 -- # decimal 1 00:08:57.910 11:08:38 event -- scripts/common.sh@353 -- # local d=1 00:08:57.910 11:08:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.910 11:08:38 event -- scripts/common.sh@355 -- # echo 1 00:08:57.910 11:08:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.910 11:08:38 event -- scripts/common.sh@366 -- # decimal 2 00:08:57.910 11:08:38 event -- scripts/common.sh@353 -- # local d=2 00:08:57.910 11:08:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.910 11:08:38 event -- scripts/common.sh@355 -- # echo 2 00:08:57.910 11:08:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.910 11:08:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.910 11:08:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.910 11:08:38 event -- scripts/common.sh@368 -- # return 0 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.910 --rc genhtml_branch_coverage=1 00:08:57.910 --rc genhtml_function_coverage=1 00:08:57.910 --rc genhtml_legend=1 00:08:57.910 --rc geninfo_all_blocks=1 00:08:57.910 --rc geninfo_unexecuted_blocks=1 00:08:57.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:57.910 ' 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.910 --rc genhtml_branch_coverage=1 00:08:57.910 --rc genhtml_function_coverage=1 00:08:57.910 --rc genhtml_legend=1 00:08:57.910 --rc geninfo_all_blocks=1 00:08:57.910 --rc geninfo_unexecuted_blocks=1 00:08:57.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:57.910 ' 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.910 --rc genhtml_branch_coverage=1 00:08:57.910 --rc genhtml_function_coverage=1 00:08:57.910 --rc genhtml_legend=1 00:08:57.910 --rc geninfo_all_blocks=1 00:08:57.910 --rc geninfo_unexecuted_blocks=1 00:08:57.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:57.910 ' 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.910 --rc genhtml_branch_coverage=1 00:08:57.910 --rc genhtml_function_coverage=1 00:08:57.910 --rc genhtml_legend=1 00:08:57.910 --rc geninfo_all_blocks=1 00:08:57.910 --rc geninfo_unexecuted_blocks=1 00:08:57.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:57.910 ' 00:08:57.910 11:08:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:57.910 11:08:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:57.910 11:08:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:57.910 11:08:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:08:57.910 11:08:38 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.910 ************************************ 00:08:57.910 START TEST event_perf 00:08:57.910 ************************************ 00:08:57.910 11:08:38 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:57.910 Running I/O for 1 seconds...[2024-10-15 11:08:38.452243] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:08:57.910 [2024-10-15 11:08:38.452325] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706357 ] 00:08:57.910 [2024-10-15 11:08:38.524459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.168 [2024-10-15 11:08:38.573558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.168 [2024-10-15 11:08:38.573644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.168 [2024-10-15 11:08:38.573721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.168 [2024-10-15 11:08:38.573723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.102 Running I/O for 1 seconds... 00:08:59.102 lcore 0: 194273 00:08:59.102 lcore 1: 194269 00:08:59.102 lcore 2: 194270 00:08:59.102 lcore 3: 194272 00:08:59.102 done. 00:08:59.102 00:08:59.102 real 0m1.183s 00:08:59.102 user 0m4.099s 00:08:59.102 sys 0m0.081s 00:08:59.102 11:08:39 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.102 11:08:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.102 ************************************ 00:08:59.102 END TEST event_perf 00:08:59.102 ************************************ 00:08:59.102 11:08:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:59.102 11:08:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:59.102 11:08:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.102 11:08:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.102 ************************************ 00:08:59.102 START TEST event_reactor 00:08:59.102 ************************************ 00:08:59.102 11:08:39 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:59.102 [2024-10-15 11:08:39.716679] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:08:59.102 [2024-10-15 11:08:39.716762] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706552 ] 00:08:59.361 [2024-10-15 11:08:39.788546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.361 [2024-10-15 11:08:39.832961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.297 test_start 00:09:00.297 oneshot 00:09:00.297 tick 100 00:09:00.297 tick 100 00:09:00.297 tick 250 00:09:00.297 tick 100 00:09:00.297 tick 100 00:09:00.297 tick 100 00:09:00.297 tick 250 00:09:00.297 tick 500 00:09:00.297 tick 100 00:09:00.297 tick 100 00:09:00.297 tick 250 00:09:00.297 tick 100 00:09:00.297 tick 100 00:09:00.297 test_end 00:09:00.297 00:09:00.297 real 0m1.175s 00:09:00.297 user 0m1.091s 00:09:00.297 sys 0m0.080s 00:09:00.297 11:08:40 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.297 11:08:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:00.297 ************************************ 00:09:00.297 END TEST event_reactor 00:09:00.297 ************************************ 00:09:00.297 11:08:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:00.297 11:08:40 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:00.297 11:08:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.297 11:08:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:00.555 ************************************ 00:09:00.556 START TEST event_reactor_perf 00:09:00.556 ************************************ 00:09:00.556 11:08:40 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:00.556 [2024-10-15 11:08:40.968335] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:09:00.556 [2024-10-15 11:08:40.968397] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706738 ] 00:09:00.556 [2024-10-15 11:08:41.033357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.556 [2024-10-15 11:08:41.077759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.491 test_start 00:09:01.491 test_end 00:09:01.491 Performance: 949836 events per second 00:09:01.491 00:09:01.491 real 0m1.159s 00:09:01.491 user 0m1.084s 00:09:01.491 sys 0m0.072s 00:09:01.491 11:08:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.491 11:08:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:01.491 ************************************ 00:09:01.491 END TEST event_reactor_perf 00:09:01.491 ************************************ 00:09:01.750 11:08:42 event -- event/event.sh@49 -- # uname -s 00:09:01.750 11:08:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:01.750 11:08:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:01.750 11:08:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.750 11:08:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.750 11:08:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:01.750 ************************************ 00:09:01.750 START TEST event_scheduler 00:09:01.750 ************************************ 00:09:01.750 11:08:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:01.750 * Looking for test storage... 
00:09:01.750 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:09:01.750 11:08:42 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.750 11:08:42 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.750 11:08:42 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.750 11:08:42 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.750 11:08:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.750 11:08:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.750 11:08:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.750 11:08:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.750 11:08:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.750 11:08:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.751 11:08:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.010 11:08:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.010 --rc genhtml_branch_coverage=1 00:09:02.010 --rc genhtml_function_coverage=1 00:09:02.010 --rc genhtml_legend=1 00:09:02.010 --rc geninfo_all_blocks=1 00:09:02.010 --rc geninfo_unexecuted_blocks=1 00:09:02.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:02.010 ' 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.010 --rc genhtml_branch_coverage=1 00:09:02.010 --rc genhtml_function_coverage=1 00:09:02.010 --rc genhtml_legend=1 00:09:02.010 --rc geninfo_all_blocks=1 00:09:02.010 --rc geninfo_unexecuted_blocks=1 00:09:02.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:02.010 ' 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.010 --rc genhtml_branch_coverage=1 00:09:02.010 --rc genhtml_function_coverage=1 00:09:02.010 --rc genhtml_legend=1 00:09:02.010 --rc geninfo_all_blocks=1 00:09:02.010 --rc geninfo_unexecuted_blocks=1 00:09:02.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:02.010 ' 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.010 --rc genhtml_branch_coverage=1 00:09:02.010 --rc genhtml_function_coverage=1 00:09:02.010 --rc genhtml_legend=1 00:09:02.010 --rc geninfo_all_blocks=1 00:09:02.010 --rc geninfo_unexecuted_blocks=1 00:09:02.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:02.010 ' 00:09:02.010 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:02.010 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3706975 00:09:02.010 11:08:42 event.event_scheduler -- 
scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:02.010 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.010 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3706975 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3706975 ']' 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.010 11:08:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.010 [2024-10-15 11:08:42.417405] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:02.011 [2024-10-15 11:08:42.417495] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706975 ] 00:09:02.011 [2024-10-15 11:08:42.482270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.011 [2024-10-15 11:08:42.528931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.011 [2024-10-15 11:08:42.529009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.011 [2024-10-15 11:08:42.529097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.011 [2024-10-15 11:08:42.529099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:02.011 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.011 [2024-10-15 11:08:42.605815] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:02.011 [2024-10-15 11:08:42.605836] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:02.011 [2024-10-15 11:08:42.605848] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:02.011 [2024-10-15 11:08:42.605857] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:02.011 [2024-10-15 11:08:42.605864] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.011 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
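For reference: the trace above launches the scheduler test app with --wait-for-rpc and then configures it over its UNIX socket before framework initialization completes. Stripped of the xtrace noise, the traced RPC sequence is roughly the following sketch (rpc.py path and socket are the ones shown in the trace; this condenses scheduler.sh, it is not the full script):

    rpc_py="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"
    # the app was started with --wait-for-rpc, so the scheduler can still be swapped here
    $rpc_py -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # traced: rpc_cmd framework_set_scheduler dynamic
    $rpc_py -s /var/tmp/spdk.sock framework_start_init              # finish init with the chosen scheduler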
00:09:02.011 11:08:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 [2024-10-15 11:08:42.679684] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:02.270 11:08:42 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.270 11:08:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:02.270 11:08:42 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.270 11:08:42 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.270 11:08:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 ************************************ 00:09:02.270 START TEST scheduler_create_thread 00:09:02.270 ************************************ 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 2 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 3 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 4 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 5 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:02.270 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 
11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 6 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 7 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 8 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 9 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 10 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 11:08:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 11:08:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.649 11:08:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.907 11:08:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:03.908 11:08:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:03.908 11:08:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.908 11:08:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.844 11:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.844 00:09:04.844 real 0m2.619s 00:09:04.844 user 0m0.022s 00:09:04.844 sys 0m0.009s 00:09:04.844 11:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.844 11:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.844 ************************************ 00:09:04.844 END TEST scheduler_create_thread 00:09:04.844 ************************************ 00:09:04.844 11:08:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:04.844 11:08:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3706975 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3706975 ']' 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3706975 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3706975 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3706975' 00:09:04.844 killing process with pid 3706975 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3706975 00:09:04.844 11:08:45 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3706975 00:09:05.410 [2024-10-15 11:08:45.821990] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
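The START TEST / END TEST banners and the real/user/sys timing line above come from the harness's run_test wrapper, not from the test itself. A simplified sketch of that pattern (the real helper in autotest_common.sh does extra bookkeeping, e.g. the '[' 2 -le 1 ']' argument check traced earlier):

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # running the test under `time` produces the real/user/sys line
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
    }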
00:09:05.410 00:09:05.410 real 0m3.787s 00:09:05.410 user 0m5.707s 00:09:05.410 sys 0m0.431s 00:09:05.410 11:08:45 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.410 11:08:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:05.410 ************************************ 00:09:05.410 END TEST event_scheduler 00:09:05.410 ************************************ 00:09:05.410 11:08:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:05.410 11:08:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:05.410 11:08:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.410 11:08:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.410 11:08:46 event -- common/autotest_common.sh@10 -- # set +x 00:09:05.669 ************************************ 00:09:05.669 START TEST app_repeat 00:09:05.669 ************************************ 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3707447 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3707447' 00:09:05.669 Process app_repeat pid: 3707447 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:05.669 spdk_app_start Round 0 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3707447 /var/tmp/spdk-nbd.sock 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3707447 ']' 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:05.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.669 [2024-10-15 11:08:46.088262] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
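The waitforlisten call traced above blocks until the freshly started app answers on its UNIX domain socket, with max_retries=100 as shown. A minimal sketch of the idea (the rpc_get_methods probe, the sleep interval, and $rootdir for the SPDK tree are assumptions; the actual helper in autotest_common.sh is more elaborate):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died before listening
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1
    }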
00:09:05.669 [2024-10-15 11:08:46.088353] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707447 ] 00:09:05.669 [2024-10-15 11:08:46.158189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.669 [2024-10-15 11:08:46.207787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.669 [2024-10-15 11:08:46.207791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.669 11:08:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:05.669 11:08:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.927 Malloc0 00:09:05.927 11:08:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:06.185 Malloc1 00:09:06.185 11:08:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:06.185 11:08:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:06.444 /dev/nbd0 00:09:06.444 11:08:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:06.444 11:08:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:06.444 1+0 records in 00:09:06.444 1+0 records out 00:09:06.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252836 s, 16.2 MB/s 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:06.444 11:08:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:06.444 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.444 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:06.444 11:08:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:06.703 /dev/nbd1 00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:06.703 1+0 records in 00:09:06.703 1+0 records out 00:09:06.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274961 s, 14.9 MB/s 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:06.703 11:08:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
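Each exported device is brought up the same way above: poll /proc/partitions until the nbd name appears, then read one 4 KiB block with O_DIRECT and check it is non-empty. A condensed sketch of that check (retry bound and dd flags as traced; the temp-file path is a stand-in for the workspace nbdtest path):

    waitfornbd() {
      local nbd_name=$1 i size
      local tmp_file=/tmp/nbdtest   # trace uses .../spdk/test/event/nbdtest
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
      done
      # a single direct-I/O read proves the device actually serves data
      dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$tmp_file")
      rm -f "$tmp_file"
      [ "$size" != 0 ]   # non-empty read means the device is live
    }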
00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.703 11:08:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:06.962 { 00:09:06.962 "nbd_device": "/dev/nbd0", 00:09:06.962 "bdev_name": "Malloc0" 00:09:06.962 }, 00:09:06.962 { 00:09:06.962 "nbd_device": "/dev/nbd1", 00:09:06.962 "bdev_name": "Malloc1" 00:09:06.962 } 00:09:06.962 ]' 00:09:06.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:06.962 { 00:09:06.962 "nbd_device": "/dev/nbd0", 00:09:06.962 "bdev_name": "Malloc0" 00:09:06.962 }, 00:09:06.962 { 00:09:06.962 "nbd_device": "/dev/nbd1", 00:09:06.962 "bdev_name": "Malloc1" 00:09:06.962 } 00:09:06.962 ]' 00:09:06.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:06.962 /dev/nbd1' 00:09:06.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:06.962 /dev/nbd1' 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:06.963 256+0 records in 00:09:06.963 256+0 records out 00:09:06.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116148 s, 90.3 MB/s 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:06.963 256+0 records in 00:09:06.963 256+0 records out 00:09:06.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021335 s, 49.1 MB/s 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.963 256+0 records in 00:09:06.963 256+0 records out 00:09:06.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220461 s, 47.6 MB/s 
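The data pass above (and the cmp verification that follows below) is plain dd plus cmp: 1 MiB of random reference data is written through each nbd device with O_DIRECT, then compared back against the source file. In outline, with the file name and sizes as traced:

    tmp=/tmp/nbdrandtest                            # trace uses .../spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256  # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do              # write pass
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do              # verify pass
      cmp -b -n 1M "$tmp" "$nbd"                    # exits non-zero on any mismatch
    done
    rm "$tmp"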
00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.222 11:08:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:07.480 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:07.480 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:07.480 11:08:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:07.480 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.480 11:08:47 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.480 11:08:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:07.480 11:08:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:07.480 11:08:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.480 11:08:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.480 11:08:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.480 11:08:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:07.738 11:08:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:07.738 11:08:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:08.010 11:08:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:08.010 [2024-10-15 11:08:48.601577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.270 [2024-10-15 11:08:48.648264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.270 [2024-10-15 11:08:48.648267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.270 [2024-10-15 11:08:48.688422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:08.270 [2024-10-15 11:08:48.688466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:11.558 11:08:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:11.558 11:08:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:11.558 spdk_app_start Round 1 00:09:11.558 11:08:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3707447 /var/tmp/spdk-nbd.sock 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3707447 ']' 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:11.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
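Rounds 0 through 2 repeat the identical create/export/write/verify/teardown cycle; each round ends with a spdk_kill_instance SIGTERM RPC and a 3-second pause before the next start. The driving loop, roughly, per the `for i in {0..2}` and `sleep 3` entries in the trace ($rootdir again stands in for the SPDK checkout):

    for i in {0..2}; do
      echo "spdk_app_start Round $i"
      # ... create Malloc0/Malloc1, export them over nbd, run the write/verify pass ...
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3   # let the app shut down before the next round starts
    done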
00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.558 11:08:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:11.558 11:08:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.558 Malloc0 00:09:11.558 11:08:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.558 Malloc1 00:09:11.558 11:08:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.558 11:08:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:11.817 /dev/nbd0 00:09:11.817 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.817 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:11.817 1+0 records in 00:09:11.817 1+0 records out 00:09:11.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242706 s, 16.9 MB/s 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.817 11:08:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:11.817 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.817 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.817 11:08:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:12.076 /dev/nbd1 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.076 1+0 records in 00:09:12.076 1+0 records out 00:09:12.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245688 s, 16.7 MB/s 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:12.076 11:08:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.076 11:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.336 { 00:09:12.336 "nbd_device": "/dev/nbd0", 00:09:12.336 "bdev_name": "Malloc0" 00:09:12.336 }, 00:09:12.336 { 00:09:12.336 "nbd_device": "/dev/nbd1", 00:09:12.336 "bdev_name": "Malloc1" 00:09:12.336 } 00:09:12.336 ]' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.336 { 00:09:12.336 "nbd_device": "/dev/nbd0", 00:09:12.336 "bdev_name": "Malloc0" 00:09:12.336 }, 00:09:12.336 { 00:09:12.336 "nbd_device": "/dev/nbd1", 00:09:12.336 "bdev_name": "Malloc1" 00:09:12.336 } 00:09:12.336 ]' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:12.336 /dev/nbd1' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:12.336 /dev/nbd1' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:12.336 256+0 records in 00:09:12.336 256+0 records out 00:09:12.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462252 s, 227 MB/s 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.336 256+0 records in 00:09:12.336 256+0 records out 00:09:12.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203674 s, 51.5 MB/s 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:12.336 256+0 records in 00:09:12.336 256+0 records out 00:09:12.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216002 s, 48.5 MB/s 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.336 11:08:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:12.594 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:12.594 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:12.594 11:08:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:12.594 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.594 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.595 11:08:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:12.595 11:08:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:12.595 11:08:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.595 11:08:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.595 11:08:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.852 11:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:13.112 11:08:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:13.112 11:08:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:13.371 11:08:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:13.371 [2024-10-15 11:08:53.908797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:13.371 [2024-10-15 11:08:53.951577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.371 [2024-10-15 11:08:53.951580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.371 [2024-10-15 11:08:53.993053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:13.371 [2024-10-15 11:08:53.993097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:16.661 11:08:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:16.661 11:08:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:16.661 spdk_app_start Round 2 00:09:16.661 11:08:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3707447 /var/tmp/spdk-nbd.sock 00:09:16.661 11:08:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3707447 ']' 00:09:16.661 11:08:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:16.661 11:08:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.661 11:08:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:16.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:16.662 11:08:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.662 11:08:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:16.662 11:08:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.662 11:08:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:16.662 11:08:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.662 Malloc0 00:09:16.662 11:08:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.921 Malloc1 00:09:16.921 11:08:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:16.921 11:08:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:17.180 /dev/nbd0 00:09:17.180 11:08:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:17.180 11:08:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.180 1+0 records in 00:09:17.180 1+0 records out 00:09:17.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249758 s, 16.4 MB/s 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:17.180 11:08:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:17.181 11:08:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:17.181 11:08:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.181 11:08:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:17.181 11:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.181 11:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.181 11:08:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:17.181 /dev/nbd1 00:09:17.439 11:08:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:17.439 11:08:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:17.439 11:08:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:17.439 11:08:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:17.439 11:08:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.440 1+0 records in 00:09:17.440 1+0 records out 00:09:17.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018149 s, 22.6 MB/s 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.440 11:08:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:17.440 11:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.440 11:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.440 11:08:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.440 11:08:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.440 11:08:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:09:17.440 11:08:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:17.440 { 00:09:17.440 "nbd_device": "/dev/nbd0", 00:09:17.440 "bdev_name": "Malloc0" 00:09:17.440 }, 00:09:17.440 { 00:09:17.440 "nbd_device": "/dev/nbd1", 00:09:17.440 "bdev_name": "Malloc1" 00:09:17.440 } 00:09:17.440 ]' 00:09:17.440 11:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:17.440 { 00:09:17.440 "nbd_device": "/dev/nbd0", 00:09:17.440 "bdev_name": "Malloc0" 00:09:17.440 }, 00:09:17.440 { 00:09:17.440 "nbd_device": "/dev/nbd1", 00:09:17.440 "bdev_name": "Malloc1" 00:09:17.440 } 00:09:17.440 ]' 00:09:17.440 11:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:17.699 /dev/nbd1' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:17.699 /dev/nbd1' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:17.699 256+0 records in 00:09:17.699 256+0 records out 00:09:17.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107507 s, 97.5 MB/s 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:17.699 256+0 records in 00:09:17.699 256+0 records out 00:09:17.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198846 s, 52.7 MB/s 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:17.699 256+0 records in 00:09:17.699 256+0 records out 00:09:17.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216882 s, 48.3 MB/s 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.699 11:08:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.958 11:08:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:18.217 11:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:18.477 11:08:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:18.477 11:08:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:18.477 11:08:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:18.736 [2024-10-15 11:08:59.237107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.736 [2024-10-15 11:08:59.282859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.736 [2024-10-15 11:08:59.282861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.736 [2024-10-15 11:08:59.323506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:18.736 [2024-10-15 11:08:59.323551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:22.025 11:09:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3707447 /var/tmp/spdk-nbd.sock 00:09:22.025 11:09:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3707447 ']' 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:22.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
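The nbd round-trip that app_repeat just completed reduces to a short RPC sequence against the app's /var/tmp/spdk-nbd.sock socket. A minimal sketch of that round, with the workspace prefix on rpc.py and the temp-file paths dropped; all commands and arguments are taken from the trace above:

    # create two 64 MB malloc bdevs with 4096-byte blocks and expose them over NBD
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

    # write 1 MiB of random data through each device, then read back and compare
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$d"
    done

    # detach both devices, confirm none remain, then restart the app for the next round
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM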
00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:22.026 11:09:02 event.app_repeat -- event/event.sh@39 -- # killprocess 3707447 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3707447 ']' 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3707447 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3707447 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3707447' 00:09:22.026 killing process with pid 3707447 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3707447 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3707447 00:09:22.026 spdk_app_start is called in Round 0. 00:09:22.026 Shutdown signal received, stop current app iteration 00:09:22.026 Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 reinitialization... 00:09:22.026 spdk_app_start is called in Round 1. 00:09:22.026 Shutdown signal received, stop current app iteration 00:09:22.026 Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 reinitialization... 00:09:22.026 spdk_app_start is called in Round 2. 00:09:22.026 Shutdown signal received, stop current app iteration 00:09:22.026 Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 reinitialization... 00:09:22.026 spdk_app_start is called in Round 3. 
00:09:22.026 Shutdown signal received, stop current app iteration 00:09:22.026 11:09:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:22.026 11:09:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:22.026 00:09:22.026 real 0m16.425s 00:09:22.026 user 0m35.371s 00:09:22.026 sys 0m3.252s 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.026 11:09:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:22.026 ************************************ 00:09:22.026 END TEST app_repeat 00:09:22.026 ************************************ 00:09:22.026 11:09:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:22.026 11:09:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:22.026 11:09:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.026 11:09:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.026 11:09:02 event -- common/autotest_common.sh@10 -- # set +x 00:09:22.026 ************************************ 00:09:22.026 START TEST cpu_locks 00:09:22.026 ************************************ 00:09:22.026 11:09:02 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:22.287 * Looking for test storage... 00:09:22.287 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.287 11:09:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:22.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.287 --rc genhtml_branch_coverage=1 00:09:22.287 --rc genhtml_function_coverage=1 00:09:22.287 --rc genhtml_legend=1 00:09:22.287 --rc geninfo_all_blocks=1 00:09:22.287 --rc geninfo_unexecuted_blocks=1 00:09:22.287 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:22.287 ' 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:22.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.287 --rc genhtml_branch_coverage=1 00:09:22.287 --rc genhtml_function_coverage=1 00:09:22.287 --rc genhtml_legend=1 00:09:22.287 --rc geninfo_all_blocks=1 00:09:22.287 --rc geninfo_unexecuted_blocks=1 00:09:22.287 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:22.287 ' 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:22.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.287 --rc genhtml_branch_coverage=1 00:09:22.287 --rc genhtml_function_coverage=1 00:09:22.287 --rc genhtml_legend=1 00:09:22.287 --rc geninfo_all_blocks=1 00:09:22.287 --rc geninfo_unexecuted_blocks=1 00:09:22.287 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:22.287 ' 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:22.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.287 --rc genhtml_branch_coverage=1 00:09:22.287 --rc genhtml_function_coverage=1 00:09:22.287 --rc genhtml_legend=1 00:09:22.287 --rc geninfo_all_blocks=1 00:09:22.287 --rc geninfo_unexecuted_blocks=1 00:09:22.287 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:22.287 ' 00:09:22.287 11:09:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:22.287 11:09:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:22.287 11:09:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:22.287 11:09:02 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.287 11:09:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.287 ************************************ 00:09:22.287 START TEST default_locks 00:09:22.287 ************************************ 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3710025 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3710025 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3710025 ']' 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.287 11:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.287 [2024-10-15 11:09:02.803134] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
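The lock probe the cpu_locks helpers run (locks_exist, traced just below) amounts to asking lslocks whether the target pid holds a file lock whose name contains spdk_cpu_lock. A sketch reconstructed from the visible trace:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # true iff the pid holds a per-core lock file
    }
    locks_exist 3710025

The recurring "lslocks: write error" lines that follow are expected rather than failures: grep -q exits at the first match, so lslocks finds its output pipe closed mid-write.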
00:09:22.287 [2024-10-15 11:09:02.803200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710025 ] 00:09:22.287 [2024-10-15 11:09:02.871917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.546 [2024-10-15 11:09:02.920574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.546 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.547 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:22.547 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3710025 00:09:22.547 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3710025 00:09:22.547 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:23.115 lslocks: write error 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3710025 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3710025 ']' 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3710025 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710025 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710025' 00:09:23.115 killing process with pid 3710025 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3710025 00:09:23.115 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3710025 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3710025 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3710025 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3710025 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3710025 ']' 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.462 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3710025) - No such process 00:09:23.462 ERROR: process (pid: 3710025) is no longer running 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:23.462 00:09:23.462 real 0m1.110s 00:09:23.462 user 0m1.070s 00:09:23.462 sys 0m0.528s 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.462 11:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.462 ************************************ 00:09:23.462 END TEST default_locks 00:09:23.462 ************************************ 00:09:23.462 11:09:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:23.462 11:09:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.462 11:09:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.462 11:09:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.462 ************************************ 00:09:23.462 START TEST default_locks_via_rpc 00:09:23.462 ************************************ 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3710270 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3710270 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3710270 ']' 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
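The NOT wrapper traced above is the harness's negative assertion: run a command, capture its exit status, and succeed only if the command failed. A simplified reconstruction from the es= handling in the trace (the real helper also validates its argument and special-cases exit codes above 128):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))     # invert: NOT passes only when the wrapped command failed
    }
    NOT waitforlisten 3710025   # passes here because pid 3710025 was already killed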
00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.462 11:09:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.462 [2024-10-15 11:09:03.961570] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:23.462 [2024-10-15 11:09:03.961613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710270 ] 00:09:23.462 [2024-10-15 11:09:04.027963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.462 [2024-10-15 11:09:04.077208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.721 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.721 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:23.721 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:23.721 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3710270 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3710270 00:09:23.722 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:23.980 11:09:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3710270 00:09:23.980 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3710270 ']' 00:09:23.980 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3710270 00:09:23.980 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:23.980 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:09:23.980 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710270 00:09:24.240 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.240 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.240 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710270' 00:09:24.240 killing process with pid 3710270 00:09:24.240 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3710270 00:09:24.240 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3710270 00:09:24.500 00:09:24.500 real 0m0.965s 00:09:24.500 user 0m0.936s 00:09:24.500 sys 0m0.428s 00:09:24.500 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.500 11:09:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.500 ************************************ 00:09:24.500 END TEST default_locks_via_rpc 00:09:24.500 ************************************ 00:09:24.500 11:09:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:24.500 11:09:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.500 11:09:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.500 11:09:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.500 ************************************ 00:09:24.500 START TEST non_locking_app_on_locked_coremask 00:09:24.500 ************************************ 00:09:24.500 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:24.500 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3710699 00:09:24.500 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3710699 /var/tmp/spdk.sock 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3710699 ']' 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.501 11:09:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.501 [2024-10-15 11:09:05.021209] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
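default_locks_via_rpc, which just finished, toggles the same core locks at runtime through the two RPCs visible in the trace instead of a boot flag. A sketch of that sequence against a running target; the empty-lock check is approximated with lslocks here, while the harness's no_locks helper gathers lock files differently:

    rpc.py framework_disable_cpumask_locks          # release the per-core lock files
    lslocks -p "$pid" | grep -c spdk_cpu_lock       # expect 0 while locks are disabled
    rpc.py framework_enable_cpumask_locks           # re-claim them
    lslocks -p "$pid" | grep -q spdk_cpu_lock       # lock held again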
00:09:24.501 [2024-10-15 11:09:05.021286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710699 ] 00:09:24.501 [2024-10-15 11:09:05.090454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.760 [2024-10-15 11:09:05.138287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3710776 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3710776 /var/tmp/spdk2.sock 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3710776 ']' 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:24.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.760 11:09:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.760 [2024-10-15 11:09:05.378316] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:24.760 [2024-10-15 11:09:05.378386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710776 ] 00:09:25.019 [2024-10-15 11:09:05.467599] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
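non_locking_app_on_locked_coremask runs two targets on the same core: pid 3710699 claims core 0 normally, and pid 3710776, launched with --disable-cpumask-locks plus its own RPC socket, prints "CPU core locks deactivated." just above and coexists instead of failing. Stripped of the workspace paths, the two launches are:

    spdk_tgt -m 0x1 &                                                  # claims the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken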
00:09:25.019 [2024-10-15 11:09:05.467632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.019 [2024-10-15 11:09:05.555547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.956 11:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.956 11:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:25.956 11:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3710699 00:09:25.956 11:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3710699 00:09:25.956 11:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:26.896 lslocks: write error 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3710699 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3710699 ']' 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3710699 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710699 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710699' 00:09:26.896 killing process with pid 3710699 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3710699 00:09:26.896 11:09:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3710699 00:09:27.464 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3710776 00:09:27.464 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3710776 ']' 00:09:27.464 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3710776 00:09:27.464 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:27.464 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.464 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710776 00:09:27.723 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.723 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.723 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710776' 00:09:27.723 
killing process with pid 3710776 00:09:27.723 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3710776 00:09:27.723 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3710776 00:09:27.982 00:09:27.982 real 0m3.432s 00:09:27.982 user 0m3.610s 00:09:27.982 sys 0m1.319s 00:09:27.982 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.982 11:09:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.982 ************************************ 00:09:27.982 END TEST non_locking_app_on_locked_coremask 00:09:27.982 ************************************ 00:09:27.982 11:09:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:27.982 11:09:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.982 11:09:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.982 11:09:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.982 ************************************ 00:09:27.982 START TEST locking_app_on_unlocked_coremask 00:09:27.982 ************************************ 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3711212 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3711212 /var/tmp/spdk.sock 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3711212 ']' 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.982 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.982 [2024-10-15 11:09:08.534498] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:27.982 [2024-10-15 11:09:08.534560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711212 ] 00:09:27.982 [2024-10-15 11:09:08.604396] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
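The killprocess helper that tears each target down (traced repeatedly above) probes the pid, inspects the process name, then kills and reaps it. A simplified reconstruction of the steps visible in the trace; the real helper special-cases a process named sudo, which is omitted here:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                        # fails fast if the pid is gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap it so the socket and lock are freed
    }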
00:09:27.982 [2024-10-15 11:09:08.604426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.240 [2024-10-15 11:09:08.651386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3711354 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3711354 /var/tmp/spdk2.sock 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3711354 ']' 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.240 11:09:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 [2024-10-15 11:09:08.878944] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:09:28.503 [2024-10-15 11:09:08.879001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711354 ] 00:09:28.504 [2024-10-15 11:09:08.968023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.504 [2024-10-15 11:09:09.055177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.439 11:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.439 11:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:29.439 11:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3711354 00:09:29.439 11:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3711354 00:09:29.439 11:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:30.007 lslocks: write error 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3711212 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3711212 ']' 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3711212 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711212 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711212' 00:09:30.007 killing process with pid 3711212 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3711212 00:09:30.007 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3711212 00:09:30.573 11:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3711354 00:09:30.573 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3711354 ']' 00:09:30.573 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3711354 00:09:30.573 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:30.573 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.573 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711354 00:09:30.574 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.574 11:09:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.574 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711354' 00:09:30.574 killing process with pid 3711354 00:09:30.574 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3711354 00:09:30.574 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3711354 00:09:30.832 00:09:30.832 real 0m2.847s 00:09:30.832 user 0m2.992s 00:09:30.832 sys 0m1.003s 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.832 ************************************ 00:09:30.832 END TEST locking_app_on_unlocked_coremask 00:09:30.832 ************************************ 00:09:30.832 11:09:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:30.832 11:09:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:30.832 11:09:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.832 11:09:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.832 ************************************ 00:09:30.832 START TEST locking_app_on_locked_coremask 00:09:30.832 ************************************ 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3711692 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3711692 /var/tmp/spdk.sock 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3711692 ']' 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.832 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.832 [2024-10-15 11:09:11.449323] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
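locking_app_on_unlocked_coremask, which just ended, inverts the earlier scenario: the first target starts with --disable-cpumask-locks so core 0 stays unclaimed, and a second plain target on the same core then starts cleanly and takes the lock itself (locks_exist 3711354 above confirms it). In outline:

    spdk_tgt -m 0x1 --disable-cpumask-locks &    # leaves core 0 unclaimed
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # claims core 0 normally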
00:09:30.832 [2024-10-15 11:09:11.449394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711692 ] 00:09:31.089 [2024-10-15 11:09:11.518641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.089 [2024-10-15 11:09:11.567033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3711769 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3711769 /var/tmp/spdk2.sock 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3711769 /var/tmp/spdk2.sock 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3711769 /var/tmp/spdk2.sock 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3711769 ']' 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.347 11:09:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.347 [2024-10-15 11:09:11.782006] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:09:31.347 [2024-10-15 11:09:11.782093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711769 ] 00:09:31.347 [2024-10-15 11:09:11.868165] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3711692 has claimed it. 00:09:31.347 [2024-10-15 11:09:11.868200] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:31.913 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3711769) - No such process 00:09:31.913 ERROR: process (pid: 3711769) is no longer running 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3711692 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3711692 00:09:31.913 11:09:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.480 lslocks: write error 00:09:32.480 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3711692 00:09:32.480 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3711692 ']' 00:09:32.480 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3711692 00:09:32.480 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:32.480 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.480 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711692 00:09:32.738 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.738 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.738 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711692' 00:09:32.738 killing process with pid 3711692 00:09:32.738 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3711692 00:09:32.738 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3711692 00:09:32.997 00:09:32.997 real 0m1.983s 00:09:32.997 user 0m2.101s 00:09:32.997 sys 0m0.710s 00:09:32.997 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
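That failure is the point of locking_app_on_locked_coremask: with pid 3711692 already holding core 0, the second target aborts during spdk_app_start with the claim error above, so it never listens on /var/tmp/spdk2.sock; the harness's kill probe then reports "No such process" for would-be pid 3711769, and the NOT wrapper converts the expected failure into a pass (es=1). In effect:

    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    # -> ERROR: Cannot create lock on core 0, probably process 3711692 has claimed it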
00:09:32.997 11:09:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.997 ************************************ 00:09:32.997 END TEST locking_app_on_locked_coremask 00:09:32.997 ************************************ 00:09:32.997 11:09:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:32.997 11:09:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:32.997 11:09:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.997 11:09:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.997 ************************************ 00:09:32.997 START TEST locking_overlapped_coremask 00:09:32.997 ************************************ 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3711984 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3711984 /var/tmp/spdk.sock 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3711984 ']' 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.997 11:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:32.997 [2024-10-15 11:09:13.501371] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
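Note: the killprocess trace above follows a fixed pattern: kill -0 to confirm the pid is still alive, ps -o comm= to make sure it is the reactor and not a sudo wrapper, then kill and wait. A minimal bash sketch of that flow, reconstructed from the xtrace (any helper internals beyond what the trace shows are assumed):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                         # bail out if the pid is already gone
        [[ $(uname) == Linux ]] || return
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK target
        if [[ $process_name != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                  # reap it; works because the test spawned it
        fi
    }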
00:09:32.997 [2024-10-15 11:09:13.501427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711984 ] 00:09:32.997 [2024-10-15 11:09:13.568974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.997 [2024-10-15 11:09:13.620008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.997 [2024-10-15 11:09:13.620097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.997 [2024-10-15 11:09:13.620100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3711992 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3711992 /var/tmp/spdk2.sock 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3711992 /var/tmp/spdk2.sock 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3711992 /var/tmp/spdk2.sock 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3711992 ']' 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:33.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.256 11:09:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.256 [2024-10-15 11:09:13.862189] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
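Note: -m takes a hexadecimal coremask, so the first target above (-m 0x7) claims cores 0-2 and the second (-m 0x1c) wants cores 2-4; the overlap on core 2 is what the claim error below trips on. Decoding a mask in plain bash (illustrative only, not part of the test):

    mask=0x1c
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"   # prints cores 2, 3 and 4
    done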
00:09:33.256 [2024-10-15 11:09:13.862255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711992 ] 00:09:33.514 [2024-10-15 11:09:13.954510] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3711984 has claimed it. 00:09:33.514 [2024-10-15 11:09:13.954548] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:34.081 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3711992) - No such process 00:09:34.081 ERROR: process (pid: 3711992) is no longer running 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3711984 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3711984 ']' 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3711984 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711984 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.081 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711984' 00:09:34.082 killing process with pid 3711984 00:09:34.082 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3711984 00:09:34.082 11:09:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3711984 00:09:34.340 00:09:34.340 real 0m1.416s 00:09:34.340 user 0m3.920s 00:09:34.340 sys 0m0.431s 00:09:34.340 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.340 11:09:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.340 ************************************ 00:09:34.340 END TEST locking_overlapped_coremask 00:09:34.340 ************************************ 00:09:34.340 11:09:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:34.340 11:09:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.340 11:09:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.340 11:09:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.598 ************************************ 00:09:34.598 START TEST locking_overlapped_coremask_via_rpc 00:09:34.598 ************************************ 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3712194 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3712194 /var/tmp/spdk.sock 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3712194 ']' 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.598 11:09:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.598 [2024-10-15 11:09:15.003779] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:34.598 [2024-10-15 11:09:15.003844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712194 ] 00:09:34.598 [2024-10-15 11:09:15.071568] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
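Note: check_remaining_locks, traced above at cpu_locks.sh@36-38, passes only if the lock files left in /var/tmp match exactly one file per core of the surviving target's mask. Reassembled from the xtrace:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, i.e. mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }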
00:09:34.598 [2024-10-15 11:09:15.071596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.598 [2024-10-15 11:09:15.117661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.598 [2024-10-15 11:09:15.117746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.598 [2024-10-15 11:09:15.117748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3712221 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3712221 /var/tmp/spdk2.sock 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3712221 ']' 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.856 11:09:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.856 [2024-10-15 11:09:15.355168] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:34.856 [2024-10-15 11:09:15.355235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712221 ] 00:09:34.856 [2024-10-15 11:09:15.452596] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
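Note: this test launches both overlapping targets with --disable-cpumask-locks, which is why both sets of reactors come up; the core locks are only claimed later over JSON-RPC. The sequence being driven, sketched from the commands in the trace (run from the SPDK tree; the real test waits for each socket before proceeding, and this is not the literal test script):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks   # first claimer wins cores 0-2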
00:09:34.856 [2024-10-15 11:09:15.452629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.115 [2024-10-15 11:09:15.556991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.115 [2024-10-15 11:09:15.557103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.115 [2024-10-15 11:09:15.557105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.683 [2024-10-15 11:09:16.243092] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3712194 has claimed it. 
00:09:35.683 request: 00:09:35.683 { 00:09:35.683 "method": "framework_enable_cpumask_locks", 00:09:35.683 "req_id": 1 00:09:35.683 } 00:09:35.683 Got JSON-RPC error response 00:09:35.683 response: 00:09:35.683 { 00:09:35.683 "code": -32603, 00:09:35.683 "message": "Failed to claim CPU core: 2" 00:09:35.683 } 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3712194 /var/tmp/spdk.sock 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3712194 ']' 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.683 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.684 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.684 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3712221 /var/tmp/spdk2.sock 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3712221 ']' 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:35.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
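Note: once the first target holds cores 0-2, asking the second one (listening on /var/tmp/spdk2.sock) to claim its own mask fails with the -32603 response above, and the NOT wrapper in the trace inverts that failure into a pass. The same check stripped to its essence, with the socket path and method name as they appear in the trace:

    if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "second target could not claim core 2, as expected"
    fi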
00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.942 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:36.202 00:09:36.202 real 0m1.680s 00:09:36.202 user 0m0.794s 00:09:36.202 sys 0m0.174s 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.202 11:09:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.202 ************************************ 00:09:36.202 END TEST locking_overlapped_coremask_via_rpc 00:09:36.202 ************************************ 00:09:36.202 11:09:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:36.202 11:09:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3712194 ]] 00:09:36.202 11:09:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3712194 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3712194 ']' 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3712194 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3712194 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3712194' 00:09:36.202 killing process with pid 3712194 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3712194 00:09:36.202 11:09:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3712194 00:09:36.461 11:09:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3712221 ]] 00:09:36.461 11:09:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3712221 00:09:36.461 11:09:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3712221 ']' 00:09:36.461 11:09:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3712221 00:09:36.461 11:09:17 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:36.461 11:09:17 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:09:36.461 11:09:17 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3712221 00:09:36.720 11:09:17 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:36.720 11:09:17 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:36.720 11:09:17 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3712221' 00:09:36.720 killing process with pid 3712221 00:09:36.720 11:09:17 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3712221 00:09:36.720 11:09:17 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3712221 00:09:36.979 11:09:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:36.979 11:09:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:36.979 11:09:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3712194 ]] 00:09:36.980 11:09:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3712194 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3712194 ']' 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3712194 00:09:36.980 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3712194) - No such process 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3712194 is not found' 00:09:36.980 Process with pid 3712194 is not found 00:09:36.980 11:09:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3712221 ]] 00:09:36.980 11:09:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3712221 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3712221 ']' 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3712221 00:09:36.980 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3712221) - No such process 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3712221 is not found' 00:09:36.980 Process with pid 3712221 is not found 00:09:36.980 11:09:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:36.980 00:09:36.980 real 0m14.875s 00:09:36.980 user 0m25.210s 00:09:36.980 sys 0m5.629s 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.980 11:09:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 ************************************ 00:09:36.980 END TEST cpu_locks 00:09:36.980 ************************************ 00:09:36.980 00:09:36.980 real 0m39.272s 00:09:36.980 user 1m12.837s 00:09:36.980 sys 0m9.987s 00:09:36.980 11:09:17 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.980 11:09:17 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 ************************************ 00:09:36.980 END TEST event 00:09:36.980 ************************************ 00:09:36.980 11:09:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:09:36.980 11:09:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.980 11:09:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.980 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 ************************************ 00:09:36.980 START TEST thread 00:09:36.980 ************************************ 00:09:36.980 11:09:17 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:09:37.239 * Looking for test storage... 00:09:37.239 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:37.239 11:09:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.239 11:09:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.239 11:09:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.239 11:09:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.239 11:09:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.239 11:09:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.239 11:09:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.239 11:09:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.239 11:09:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.239 11:09:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.239 11:09:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.239 11:09:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:37.239 11:09:17 thread -- scripts/common.sh@345 -- # : 1 00:09:37.239 11:09:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.239 11:09:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.239 11:09:17 thread -- scripts/common.sh@365 -- # decimal 1 00:09:37.239 11:09:17 thread -- scripts/common.sh@353 -- # local d=1 00:09:37.239 11:09:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.239 11:09:17 thread -- scripts/common.sh@355 -- # echo 1 00:09:37.239 11:09:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.239 11:09:17 thread -- scripts/common.sh@366 -- # decimal 2 00:09:37.239 11:09:17 thread -- scripts/common.sh@353 -- # local d=2 00:09:37.239 11:09:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.239 11:09:17 thread -- scripts/common.sh@355 -- # echo 2 00:09:37.239 11:09:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.239 11:09:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.239 11:09:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.239 11:09:17 thread -- scripts/common.sh@368 -- # return 0 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:37.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.239 --rc genhtml_branch_coverage=1 00:09:37.239 --rc genhtml_function_coverage=1 00:09:37.239 --rc genhtml_legend=1 00:09:37.239 --rc geninfo_all_blocks=1 00:09:37.239 --rc geninfo_unexecuted_blocks=1 00:09:37.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:37.239 ' 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:37.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.239 --rc genhtml_branch_coverage=1 00:09:37.239 --rc genhtml_function_coverage=1 00:09:37.239 --rc genhtml_legend=1 
00:09:37.239 --rc geninfo_all_blocks=1 00:09:37.239 --rc geninfo_unexecuted_blocks=1 00:09:37.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:37.239 ' 00:09:37.239 11:09:17 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:37.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.239 --rc genhtml_branch_coverage=1 00:09:37.239 --rc genhtml_function_coverage=1 00:09:37.239 --rc genhtml_legend=1 00:09:37.239 --rc geninfo_all_blocks=1 00:09:37.239 --rc geninfo_unexecuted_blocks=1 00:09:37.239 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:37.239 ' 00:09:37.240 11:09:17 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:37.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.240 --rc genhtml_branch_coverage=1 00:09:37.240 --rc genhtml_function_coverage=1 00:09:37.240 --rc genhtml_legend=1 00:09:37.240 --rc geninfo_all_blocks=1 00:09:37.240 --rc geninfo_unexecuted_blocks=1 00:09:37.240 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:37.240 ' 00:09:37.240 11:09:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:37.240 11:09:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:37.240 11:09:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.240 11:09:17 thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.240 ************************************ 00:09:37.240 START TEST thread_poller_perf 00:09:37.240 ************************************ 00:09:37.240 11:09:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:37.240 [2024-10-15 11:09:17.782217] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:37.240 [2024-10-15 11:09:17.782297] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712660 ] 00:09:37.240 [2024-10-15 11:09:17.852761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.505 [2024-10-15 11:09:17.897777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.505 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:38.446 [2024-10-15T09:09:19.077Z] ====================================== 00:09:38.446 [2024-10-15T09:09:19.077Z] busy:2303093122 (cyc) 00:09:38.446 [2024-10-15T09:09:19.077Z] total_run_count: 817000 00:09:38.446 [2024-10-15T09:09:19.077Z] tsc_hz: 2300000000 (cyc) 00:09:38.446 [2024-10-15T09:09:19.077Z] ====================================== 00:09:38.446 [2024-10-15T09:09:19.077Z] poller_cost: 2818 (cyc), 1225 (nsec) 00:09:38.446 00:09:38.446 real 0m1.176s 00:09:38.446 user 0m1.092s 00:09:38.446 sys 0m0.081s 00:09:38.446 11:09:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.446 11:09:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:38.446 ************************************ 00:09:38.446 END TEST thread_poller_perf 00:09:38.446 ************************************ 00:09:38.446 11:09:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:38.446 11:09:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:38.446 11:09:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.446 11:09:18 thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.446 ************************************ 00:09:38.446 START TEST thread_poller_perf 00:09:38.446 ************************************ 00:09:38.446 11:09:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:38.446 [2024-10-15 11:09:19.022810] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:38.446 [2024-10-15 11:09:19.022889] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712852 ] 00:09:38.705 [2024-10-15 11:09:19.093006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.705 [2024-10-15 11:09:19.136334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.705 Running 1000 pollers for 1 seconds with 0 microseconds period. 
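Note: poller_cost in the table above is busy cycles divided by iterations, converted to nanoseconds via tsc_hz. Redoing the first run's arithmetic in plain bash (numbers copied from the table; the exact rounding inside poller_perf is assumed to be integer division, which reproduces the reported values):

    busy=2303093122 total_run_count=817000 tsc_hz=2300000000
    poller_cost_cyc=$(( busy / total_run_count ))                  # 2818
    poller_cost_nsec=$(( poller_cost_cyc * 1000000000 / tsc_hz ))  # 1225
    echo "poller_cost: ${poller_cost_cyc} (cyc), ${poller_cost_nsec} (nsec)"

The zero-period run below comes out far cheaper (181 cyc, 78 nsec), consistent with timed pollers costing more per iteration than plain active pollers.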
00:09:39.641 [2024-10-15T09:09:20.272Z] ====================================== 00:09:39.641 [2024-10-15T09:09:20.272Z] busy:2301310070 (cyc) 00:09:39.641 [2024-10-15T09:09:20.272Z] total_run_count: 12707000 00:09:39.641 [2024-10-15T09:09:20.272Z] tsc_hz: 2300000000 (cyc) 00:09:39.641 [2024-10-15T09:09:20.272Z] ====================================== 00:09:39.641 [2024-10-15T09:09:20.272Z] poller_cost: 181 (cyc), 78 (nsec) 00:09:39.641 00:09:39.641 real 0m1.174s 00:09:39.641 user 0m1.092s 00:09:39.641 sys 0m0.078s 00:09:39.641 11:09:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.641 11:09:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.641 ************************************ 00:09:39.641 END TEST thread_poller_perf 00:09:39.641 ************************************ 00:09:39.641 11:09:20 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:39.641 11:09:20 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:09:39.641 11:09:20 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.641 11:09:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.641 11:09:20 thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.641 ************************************ 00:09:39.641 START TEST thread_spdk_lock 00:09:39.641 ************************************ 00:09:39.641 11:09:20 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:09:39.901 [2024-10-15 11:09:20.276201] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:39.901 [2024-10-15 11:09:20.276284] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713046 ] 00:09:39.901 [2024-10-15 11:09:20.349255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.901 [2024-10-15 11:09:20.397169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.901 [2024-10-15 11:09:20.397171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.470 [2024-10-15 11:09:20.892880] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:40.470 [2024-10-15 11:09:20.892920] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:40.470 [2024-10-15 11:09:20.892931] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14d2b40 00:09:40.470 [2024-10-15 11:09:20.893653] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:40.470 [2024-10-15 11:09:20.893760] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:40.470 [2024-10-15 
11:09:20.893778] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:40.470 Starting test contend
00:09:40.470   Worker    Delay  Wait us  Hold us  Total us
00:09:40.470        0        3   173256   187609    360866
00:09:40.470        1        5    90426   289258    379684
00:09:40.470 PASS test contend
00:09:40.470 Starting test hold_by_poller
00:09:40.470 PASS test hold_by_poller
00:09:40.470 Starting test hold_by_message
00:09:40.470 PASS test hold_by_message
00:09:40.470 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:09:40.470 100014 assertions passed
00:09:40.470 0 assertions failed
00:09:40.470
00:09:40.470 real 0m0.675s
00:09:40.470 user 0m1.084s
00:09:40.470 sys 0m0.084s
00:09:40.470 11:09:20 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:40.470 11:09:20 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:09:40.470 ************************************
00:09:40.470 END TEST thread_spdk_lock
00:09:40.470 ************************************
00:09:40.470
00:09:40.470 real 0m3.421s
00:09:40.470 user 0m3.450s
00:09:40.470 sys 0m0.486s
00:09:40.470 11:09:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:40.470 11:09:20 thread -- common/autotest_common.sh@10 -- # set +x
00:09:40.470 ************************************
00:09:40.470 END TEST thread
00:09:40.470 ************************************
00:09:40.470 11:09:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:09:40.470 11:09:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:09:40.470 11:09:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:40.470 11:09:21 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:40.470 11:09:21 -- common/autotest_common.sh@10 -- # set +x
00:09:40.470 ************************************
00:09:40.470 START TEST app_cmdline
00:09:40.470 ************************************
00:09:40.470 11:09:21 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:09:40.730 * Looking for test storage...
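Note: in the contend table above, Total us appears to be Wait us plus Hold us per worker, to within microsecond rounding (inferred from the numbers, not from the test source):

    echo $(( 173256 + 187609 ))   # 360865, reported as 360866 (rounding)
    echo $(( 90426 + 289258 ))    # 379684, matches exactly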
00:09:40.730 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.730 11:09:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.730 --rc genhtml_branch_coverage=1 00:09:40.730 --rc genhtml_function_coverage=1 00:09:40.730 --rc genhtml_legend=1 00:09:40.730 --rc geninfo_all_blocks=1 00:09:40.730 --rc geninfo_unexecuted_blocks=1 00:09:40.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:40.730 ' 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.730 --rc genhtml_branch_coverage=1 00:09:40.730 --rc genhtml_function_coverage=1 00:09:40.730 --rc 
genhtml_legend=1 00:09:40.730 --rc geninfo_all_blocks=1 00:09:40.730 --rc geninfo_unexecuted_blocks=1 00:09:40.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:40.730 ' 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.730 --rc genhtml_branch_coverage=1 00:09:40.730 --rc genhtml_function_coverage=1 00:09:40.730 --rc genhtml_legend=1 00:09:40.730 --rc geninfo_all_blocks=1 00:09:40.730 --rc geninfo_unexecuted_blocks=1 00:09:40.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:40.730 ' 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.730 --rc genhtml_branch_coverage=1 00:09:40.730 --rc genhtml_function_coverage=1 00:09:40.730 --rc genhtml_legend=1 00:09:40.730 --rc geninfo_all_blocks=1 00:09:40.730 --rc geninfo_unexecuted_blocks=1 00:09:40.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:40.730 ' 00:09:40.730 11:09:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:40.730 11:09:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3713286 00:09:40.730 11:09:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3713286 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3713286 ']' 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.730 11:09:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:40.730 11:09:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:40.730 [2024-10-15 11:09:21.269644] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
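Note: the target above is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods should be callable. The positive half of the check, condensed from the cmdline.sh@26 trace below (rpc_cmd in the trace is the test wrapper around scripts/rpc.py):

    methods=($(scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort))
    [[ ${methods[*]} == "rpc_get_methods spdk_get_version" ]] && echo "whitelist holds"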
00:09:40.730 [2024-10-15 11:09:21.269735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713286 ] 00:09:40.730 [2024-10-15 11:09:21.338196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.990 [2024-10-15 11:09:21.387274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.990 11:09:21 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.990 11:09:21 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:40.990 11:09:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:41.249 { 00:09:41.249 "version": "SPDK v25.01-pre git sha1 35c8daa94", 00:09:41.249 "fields": { 00:09:41.250 "major": 25, 00:09:41.250 "minor": 1, 00:09:41.250 "patch": 0, 00:09:41.250 "suffix": "-pre", 00:09:41.250 "commit": "35c8daa94" 00:09:41.250 } 00:09:41.250 } 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:41.250 11:09:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:41.250 11:09:21 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:09:41.250 11:09:21 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:41.509 request: 00:09:41.509 { 00:09:41.510 "method": "env_dpdk_get_mem_stats", 00:09:41.510 "req_id": 1 00:09:41.510 } 00:09:41.510 Got JSON-RPC error response 00:09:41.510 response: 00:09:41.510 { 00:09:41.510 "code": -32601, 00:09:41.510 "message": "Method not found" 00:09:41.510 } 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:41.510 11:09:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3713286 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3713286 ']' 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3713286 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3713286 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3713286' 00:09:41.510 killing process with pid 3713286 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@969 -- # kill 3713286 00:09:41.510 11:09:22 app_cmdline -- common/autotest_common.sh@974 -- # wait 3713286 00:09:41.769 00:09:41.769 real 0m1.316s 00:09:41.769 user 0m1.504s 00:09:41.769 sys 0m0.477s 00:09:41.769 11:09:22 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.769 11:09:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:41.769 ************************************ 00:09:41.769 END TEST app_cmdline 00:09:41.769 ************************************ 00:09:42.029 11:09:22 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:09:42.029 11:09:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:42.029 11:09:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.029 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:09:42.029 ************************************ 00:09:42.029 START TEST version 00:09:42.029 ************************************ 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:09:42.029 * Looking for test storage... 
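Note: the negative half of the whitelist check: env_dpdk_get_mem_stats is not on the --rpcs-allowed list, so the target answers -32601 Method not found rather than executing it, and the NOT wrapper above turns that rejection into a pass. Stripped to its essence:

    if ! scripts/rpc.py env_dpdk_get_mem_stats; then
        echo "env_dpdk_get_mem_stats rejected: not in --rpcs-allowed"
    fi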
00:09:42.029 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1691 -- # lcov --version 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:42.029 11:09:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.029 11:09:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.029 11:09:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.029 11:09:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.029 11:09:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.029 11:09:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.029 11:09:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.029 11:09:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.029 11:09:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.029 11:09:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.029 11:09:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.029 11:09:22 version -- scripts/common.sh@344 -- # case "$op" in 00:09:42.029 11:09:22 version -- scripts/common.sh@345 -- # : 1 00:09:42.029 11:09:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.029 11:09:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.029 11:09:22 version -- scripts/common.sh@365 -- # decimal 1 00:09:42.029 11:09:22 version -- scripts/common.sh@353 -- # local d=1 00:09:42.029 11:09:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.029 11:09:22 version -- scripts/common.sh@355 -- # echo 1 00:09:42.029 11:09:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.029 11:09:22 version -- scripts/common.sh@366 -- # decimal 2 00:09:42.029 11:09:22 version -- scripts/common.sh@353 -- # local d=2 00:09:42.029 11:09:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.029 11:09:22 version -- scripts/common.sh@355 -- # echo 2 00:09:42.029 11:09:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.029 11:09:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.029 11:09:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.029 11:09:22 version -- scripts/common.sh@368 -- # return 0 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:42.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.029 --rc genhtml_branch_coverage=1 00:09:42.029 --rc genhtml_function_coverage=1 00:09:42.029 --rc genhtml_legend=1 00:09:42.029 --rc geninfo_all_blocks=1 00:09:42.029 --rc geninfo_unexecuted_blocks=1 00:09:42.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.029 ' 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:42.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.029 --rc genhtml_branch_coverage=1 00:09:42.029 --rc genhtml_function_coverage=1 00:09:42.029 --rc genhtml_legend=1 00:09:42.029 --rc geninfo_all_blocks=1 00:09:42.029 --rc geninfo_unexecuted_blocks=1 00:09:42.029 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.029 ' 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:42.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.029 --rc genhtml_branch_coverage=1 00:09:42.029 --rc genhtml_function_coverage=1 00:09:42.029 --rc genhtml_legend=1 00:09:42.029 --rc geninfo_all_blocks=1 00:09:42.029 --rc geninfo_unexecuted_blocks=1 00:09:42.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.029 ' 00:09:42.029 11:09:22 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:42.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.029 --rc genhtml_branch_coverage=1 00:09:42.029 --rc genhtml_function_coverage=1 00:09:42.029 --rc genhtml_legend=1 00:09:42.029 --rc geninfo_all_blocks=1 00:09:42.029 --rc geninfo_unexecuted_blocks=1 00:09:42.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.029 ' 00:09:42.289 11:09:22 version -- app/version.sh@17 -- # get_header_version major 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # cut -f2 00:09:42.289 11:09:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.289 11:09:22 version -- app/version.sh@17 -- # major=25 00:09:42.289 11:09:22 version -- app/version.sh@18 -- # get_header_version minor 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # cut -f2 00:09:42.289 11:09:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.289 11:09:22 version -- app/version.sh@18 -- # minor=1 00:09:42.289 11:09:22 version -- app/version.sh@19 -- # get_header_version patch 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # cut -f2 00:09:42.289 11:09:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.289 11:09:22 version -- app/version.sh@19 -- # patch=0 00:09:42.289 11:09:22 version -- app/version.sh@20 -- # get_header_version suffix 00:09:42.289 11:09:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # cut -f2 00:09:42.289 11:09:22 version -- app/version.sh@14 -- # tr -d '"' 00:09:42.289 11:09:22 version -- app/version.sh@20 -- # suffix=-pre 00:09:42.289 11:09:22 version -- app/version.sh@22 -- # version=25.1 00:09:42.289 11:09:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:42.289 11:09:22 version -- app/version.sh@28 -- # version=25.1rc0 00:09:42.289 11:09:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:42.289 11:09:22 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:09:42.289 11:09:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:42.289 11:09:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:42.289 00:09:42.289 real 0m0.276s 00:09:42.289 user 0m0.149s 00:09:42.289 sys 0m0.177s 00:09:42.289 11:09:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.289 11:09:22 version -- common/autotest_common.sh@10 -- # set +x 00:09:42.289 ************************************ 00:09:42.289 END TEST version 00:09:42.289 ************************************ 00:09:42.289 11:09:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@194 -- # uname -s 00:09:42.289 11:09:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:42.289 11:09:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.289 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:09:42.289 11:09:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:09:42.289 11:09:22 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:09:42.289 11:09:22 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:09:42.289 11:09:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:42.289 11:09:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.289 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:09:42.289 ************************************ 00:09:42.289 START TEST llvm_fuzz 00:09:42.289 ************************************ 00:09:42.289 11:09:22 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:09:42.548 * Looking for test storage... 
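[editor's note] The version test traced above derives the SPDK version string from include/spdk/version.h (grep the #define, cut the field, strip quotes), assembles 25.1rc0 from major=25, minor=1, patch=0, suffix=-pre, and cross-checks it against the installed python package. A sketch of that pipeline, assuming tab-separated fields in the header (which the traced cut -f2 implies) and inferring the -pre-to-rc0 step from the values shown:

  hdr=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)     # 25 in this run
  minor=$(get_header_version MINOR)     # 1
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre
  version=$major.$minor
  if (( patch != 0 )); then version=$version.$patch; fi
  if [[ $suffix == -pre ]]; then version=${version}rc0; fi   # -> 25.1rc0
  py=$(python3 -c 'import spdk; print(spdk.__version__)')    # as traced
  [[ $py == "$version" ]]               # mismatch fails the test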
00:09:42.549 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:09:42.549 11:09:22 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:42.549 11:09:22 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:09:42.549 11:09:22 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.549 11:09:23 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:42.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.549 --rc genhtml_branch_coverage=1 00:09:42.549 --rc genhtml_function_coverage=1 00:09:42.549 --rc genhtml_legend=1 00:09:42.549 --rc geninfo_all_blocks=1 00:09:42.549 --rc geninfo_unexecuted_blocks=1 00:09:42.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.549 ' 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:42.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.549 --rc genhtml_branch_coverage=1 00:09:42.549 --rc genhtml_function_coverage=1 00:09:42.549 --rc genhtml_legend=1 00:09:42.549 --rc geninfo_all_blocks=1 00:09:42.549 --rc 
geninfo_unexecuted_blocks=1 00:09:42.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.549 ' 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:42.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.549 --rc genhtml_branch_coverage=1 00:09:42.549 --rc genhtml_function_coverage=1 00:09:42.549 --rc genhtml_legend=1 00:09:42.549 --rc geninfo_all_blocks=1 00:09:42.549 --rc geninfo_unexecuted_blocks=1 00:09:42.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.549 ' 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:42.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.549 --rc genhtml_branch_coverage=1 00:09:42.549 --rc genhtml_function_coverage=1 00:09:42.549 --rc genhtml_legend=1 00:09:42.549 --rc geninfo_all_blocks=1 00:09:42.549 --rc geninfo_unexecuted_blocks=1 00:09:42.549 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.549 ' 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:42.549 11:09:23 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.549 11:09:23 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:42.549 ************************************ 00:09:42.549 START TEST nvmf_llvm_fuzz 00:09:42.549 ************************************ 00:09:42.549 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:09:42.549 * Looking for test storage... 
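[editor's note] The llvm.sh trace above shows how fuzz targets are discovered: glob test/fuzz/llvm/, strip directories with "${fuzzers[@]##*/}" (yielding common.sh llvm-gcov.sh nvmf vfio), then let a case statement skip the helper scripts so only nvmf and vfio get a run_test. A sketch of that loop; the exact case patterns are inferred from which entries the trace skips:

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  fuzzers=("$rootdir/test/fuzz/llvm/"*)   # common.sh llvm-gcov.sh nvmf vfio
  fuzzers=("${fuzzers[@]##*/}")           # keep basenames only
  for fuzzer in "${fuzzers[@]}"; do
      case "$fuzzer" in
          nvmf | vfio)
              run_test "${fuzzer}_llvm_fuzz" \
                  "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
          *) ;;                           # helper scripts, not targets
      esac
  done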
00:09:42.811 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:42.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.811 --rc genhtml_branch_coverage=1 00:09:42.811 --rc genhtml_function_coverage=1 00:09:42.811 --rc genhtml_legend=1 00:09:42.811 --rc geninfo_all_blocks=1 00:09:42.811 --rc geninfo_unexecuted_blocks=1 00:09:42.811 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.811 ' 00:09:42.811 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:42.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.811 --rc genhtml_branch_coverage=1 00:09:42.811 --rc genhtml_function_coverage=1 00:09:42.811 --rc genhtml_legend=1 00:09:42.811 --rc geninfo_all_blocks=1 00:09:42.811 --rc geninfo_unexecuted_blocks=1 00:09:42.811 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.811 ' 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:42.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.812 --rc genhtml_branch_coverage=1 00:09:42.812 --rc genhtml_function_coverage=1 00:09:42.812 --rc genhtml_legend=1 00:09:42.812 --rc geninfo_all_blocks=1 00:09:42.812 --rc geninfo_unexecuted_blocks=1 00:09:42.812 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.812 ' 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:42.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.812 --rc genhtml_branch_coverage=1 00:09:42.812 --rc genhtml_function_coverage=1 00:09:42.812 --rc genhtml_legend=1 00:09:42.812 --rc geninfo_all_blocks=1 00:09:42.812 --rc geninfo_unexecuted_blocks=1 00:09:42.812 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:42.812 ' 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:09:42.812 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:42.813 #define SPDK_CONFIG_H 00:09:42.813 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:42.813 #define SPDK_CONFIG_APPS 1 00:09:42.813 #define SPDK_CONFIG_ARCH native 00:09:42.813 #undef SPDK_CONFIG_ASAN 00:09:42.813 #undef SPDK_CONFIG_AVAHI 00:09:42.813 #undef SPDK_CONFIG_CET 00:09:42.813 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:42.813 #define SPDK_CONFIG_COVERAGE 1 00:09:42.813 #define SPDK_CONFIG_CROSS_PREFIX 00:09:42.813 #undef SPDK_CONFIG_CRYPTO 00:09:42.813 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:42.813 #undef SPDK_CONFIG_CUSTOMOCF 00:09:42.813 #undef SPDK_CONFIG_DAOS 00:09:42.813 #define SPDK_CONFIG_DAOS_DIR 00:09:42.813 #define SPDK_CONFIG_DEBUG 1 00:09:42.813 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:42.813 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:42.813 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:42.813 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:42.813 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:42.813 #undef SPDK_CONFIG_DPDK_UADK 00:09:42.813 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:42.813 #define SPDK_CONFIG_EXAMPLES 1 00:09:42.813 #undef SPDK_CONFIG_FC 00:09:42.813 #define SPDK_CONFIG_FC_PATH 00:09:42.813 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:42.813 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:42.813 #define SPDK_CONFIG_FSDEV 1 00:09:42.813 #undef SPDK_CONFIG_FUSE 00:09:42.813 #define SPDK_CONFIG_FUZZER 1 00:09:42.813 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:42.813 #undef SPDK_CONFIG_GOLANG 00:09:42.813 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:42.813 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:42.813 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:42.813 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:42.813 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:42.813 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:42.813 #undef SPDK_CONFIG_HAVE_LZ4 00:09:42.813 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:42.813 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:42.813 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:42.813 #define SPDK_CONFIG_IDXD 1 00:09:42.813 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:42.813 #undef SPDK_CONFIG_IPSEC_MB 00:09:42.813 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:42.813 #define SPDK_CONFIG_ISAL 1 00:09:42.813 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:42.813 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:42.813 #define SPDK_CONFIG_LIBDIR 00:09:42.813 #undef SPDK_CONFIG_LTO 00:09:42.813 #define SPDK_CONFIG_MAX_LCORES 128 00:09:42.813 #define SPDK_CONFIG_NVME_CUSE 1 00:09:42.813 #undef SPDK_CONFIG_OCF 00:09:42.813 #define SPDK_CONFIG_OCF_PATH 00:09:42.813 #define SPDK_CONFIG_OPENSSL_PATH 00:09:42.813 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:42.813 #define SPDK_CONFIG_PGO_DIR 00:09:42.813 #undef SPDK_CONFIG_PGO_USE 00:09:42.813 #define SPDK_CONFIG_PREFIX /usr/local 00:09:42.813 #undef SPDK_CONFIG_RAID5F 00:09:42.813 #undef SPDK_CONFIG_RBD 00:09:42.813 #define SPDK_CONFIG_RDMA 1 00:09:42.813 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:42.813 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:42.813 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:42.813 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:42.813 #undef SPDK_CONFIG_SHARED 00:09:42.813 #undef SPDK_CONFIG_SMA 00:09:42.813 #define SPDK_CONFIG_TESTS 1 00:09:42.813 #undef SPDK_CONFIG_TSAN 00:09:42.813 #define SPDK_CONFIG_UBLK 1 00:09:42.813 #define SPDK_CONFIG_UBSAN 1 00:09:42.813 #undef SPDK_CONFIG_UNIT_TESTS 00:09:42.813 #undef SPDK_CONFIG_URING 00:09:42.813 #define SPDK_CONFIG_URING_PATH 00:09:42.813 #undef SPDK_CONFIG_URING_ZNS 00:09:42.813 #undef SPDK_CONFIG_USDT 00:09:42.813 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:42.813 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:42.813 #define SPDK_CONFIG_VFIO_USER 1 00:09:42.813 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:42.813 #define SPDK_CONFIG_VHOST 1 00:09:42.813 #define SPDK_CONFIG_VIRTIO 1 00:09:42.813 #undef SPDK_CONFIG_VTUNE 00:09:42.813 #define SPDK_CONFIG_VTUNE_DIR 00:09:42.813 #define SPDK_CONFIG_WERROR 1 00:09:42.813 #define SPDK_CONFIG_WPDK_DIR 00:09:42.813 #undef SPDK_CONFIG_XNVME 00:09:42.813 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:42.813 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:42.814 11:09:23 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
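[editor's note] The long run of paired ": N" / "export FLAG" entries above and below comes from autotest_common.sh setting a default for each test knob and exporting it: a flag inherited from autorun-spdk.conf (e.g. SPDK_TEST_FUZZER=1) keeps its value, everything else defaults. xtrace prints only the expanded result, which is why each pair appears as "# : 0" then "# export FLAG". A sketch of that idiom using flags from this run:

  : "${RUN_NIGHTLY:=0}";                 export RUN_NIGHTLY
  : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";    export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_FUZZER:=1}";            export SPDK_TEST_FUZZER
  : "${SPDK_TEST_FUZZER_SHORT:=1}";      export SPDK_TEST_FUZZER_SHORT
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
  # ':' is a no-op, so the ${VAR:=default} expansion is its only effect.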
00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:42.814 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3713646 ]] 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3713646 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:42.815 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.PUTc01 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.PUTc01/tests/nvmf /tmp/spdk.PUTc01 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86820696064 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500372480 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7679676416 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 
11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245422592 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894340096 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900074496 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5734400 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249801216 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=385024 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450024960 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450037248 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:42.816 * Looking for test storage... 
00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86820696064 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=9894268928 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:42.816 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:09:42.816 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:42.817 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:42.817 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:42.817 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:42.817 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:42.817 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:42.817 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.081 --rc genhtml_branch_coverage=1 00:09:43.081 --rc genhtml_function_coverage=1 00:09:43.081 --rc genhtml_legend=1 00:09:43.081 --rc geninfo_all_blocks=1 00:09:43.081 --rc geninfo_unexecuted_blocks=1 00:09:43.081 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:43.081 ' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.081 --rc genhtml_branch_coverage=1 00:09:43.081 --rc genhtml_function_coverage=1 00:09:43.081 --rc genhtml_legend=1 00:09:43.081 --rc geninfo_all_blocks=1 00:09:43.081 --rc geninfo_unexecuted_blocks=1 00:09:43.081 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:43.081 ' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.081 --rc genhtml_branch_coverage=1 00:09:43.081 --rc genhtml_function_coverage=1 00:09:43.081 --rc genhtml_legend=1 00:09:43.081 --rc geninfo_all_blocks=1 00:09:43.081 --rc geninfo_unexecuted_blocks=1 00:09:43.081 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:43.081 ' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:43.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.081 --rc genhtml_branch_coverage=1 00:09:43.081 --rc genhtml_function_coverage=1 00:09:43.081 --rc genhtml_legend=1 00:09:43.081 --rc geninfo_all_blocks=1 00:09:43.081 --rc geninfo_unexecuted_blocks=1 00:09:43.081 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:43.081 ' 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:09:43.081 11:09:23 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:09:43.081 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:43.082 11:09:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:09:43.082 [2024-10-15 11:09:23.586369] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:43.082 [2024-10-15 11:09:23.586442] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713859 ] 00:09:43.406 [2024-10-15 11:09:23.845001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.406 [2024-10-15 11:09:23.899414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.406 [2024-10-15 11:09:23.958289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.406 [2024-10-15 11:09:23.974461] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:09:43.406 INFO: Running with entropic power schedule (0xFF, 100). 00:09:43.406 INFO: Seed: 2171283754 00:09:43.406 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:09:43.406 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:09:43.406 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:09:43.406 INFO: A corpus is not provided, starting from an empty corpus 00:09:43.406 #2 INITED exec/s: 0 rss: 65Mb 00:09:43.406 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:43.406 This may also happen if the target rejected all inputs we tried so far 00:09:43.406 [2024-10-15 11:09:24.029978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.406 [2024-10-15 11:09:24.030011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.961 NEW_FUNC[1/713]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:09:43.961 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:43.961 #8 NEW cov: 12147 ft: 12140 corp: 2/101b lim: 320 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:43.961 [2024-10-15 11:09:24.370998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.961 [2024-10-15 11:09:24.371072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.961 #9 NEW cov: 12260 ft: 12899 corp: 3/201b lim: 320 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 ShuffleBytes- 00:09:43.961 [2024-10-15 11:09:24.440941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.961 [2024-10-15 11:09:24.440969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.961 #10 NEW cov: 12266 ft: 13197 corp: 4/288b lim: 320 exec/s: 0 rss: 74Mb L: 87/100 MS: 1 InsertRepeatedBytes- 00:09:43.961 [2024-10-15 11:09:24.481002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:63000000 cdw11:00000000 00:09:43.961 [2024-10-15 11:09:24.481042] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.961 #11 NEW cov: 12351 ft: 13552 corp: 5/375b lim: 320 exec/s: 0 rss: 74Mb L: 87/100 MS: 1 ChangeByte- 00:09:43.961 [2024-10-15 11:09:24.541208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.961 [2024-10-15 11:09:24.541236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.961 #12 NEW cov: 12351 ft: 13625 corp: 6/475b lim: 320 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 ChangeBit- 00:09:44.267 [2024-10-15 11:09:24.581268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.267 [2024-10-15 11:09:24.581303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.267 NEW_FUNC[1/1]: 0x1935ac8 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:09:44.267 #13 NEW cov: 12391 ft: 13848 corp: 7/544b lim: 320 exec/s: 0 rss: 74Mb L: 69/100 MS: 1 InsertRepeatedBytes- 00:09:44.267 [2024-10-15 11:09:24.621459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00008000 00:09:44.267 [2024-10-15 11:09:24.621486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.267 #14 NEW cov: 12391 ft: 13886 corp: 8/644b lim: 320 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 ChangeBit- 00:09:44.267 [2024-10-15 11:09:24.681574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:4a4a4a4a cdw10:4a4a4a4a cdw11:4a4a4a4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x4a4a4a4a4a4a4a4a 00:09:44.267 [2024-10-15 11:09:24.681601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.267 NEW_FUNC[1/1]: 0x14fd728 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:09:44.267 #17 NEW cov: 12422 ft: 13997 corp: 9/712b lim: 320 exec/s: 0 rss: 74Mb L: 68/100 MS: 3 CMP-ChangeBit-InsertRepeatedBytes- DE: "\377\377\377\377"- 00:09:44.267 [2024-10-15 11:09:24.721929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b7b7b7b7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.267 [2024-10-15 11:09:24.721956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.267 [2024-10-15 11:09:24.722021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b7) qid:0 cid:5 nsid:b7b7b7b7 cdw10:b7b7b7b7 cdw11:b7b7b7b7 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb7b7b7b7b7b7b7b7 00:09:44.267 [2024-10-15 11:09:24.722042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.267 [2024-10-15 11:09:24.722103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b7) qid:0 cid:6 nsid:b7b7b7b7 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb7b7b7b7b7b7b7b7 00:09:44.267 [2024-10-15 11:09:24.722117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:09:44.267 #18 NEW cov: 12422 ft: 14311 corp: 10/909b lim: 320 exec/s: 0 rss: 74Mb L: 197/197 MS: 1 InsertRepeatedBytes- 00:09:44.267 [2024-10-15 11:09:24.781882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.267 [2024-10-15 11:09:24.781908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.267 #19 NEW cov: 12422 ft: 14401 corp: 11/1000b lim: 320 exec/s: 0 rss: 74Mb L: 91/197 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:09:44.267 [2024-10-15 11:09:24.821971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:63000000 cdw11:00000000 00:09:44.267 [2024-10-15 11:09:24.821997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.267 #20 NEW cov: 12422 ft: 14488 corp: 12/1087b lim: 320 exec/s: 0 rss: 74Mb L: 87/197 MS: 1 ChangeBinInt- 00:09:44.267 [2024-10-15 11:09:24.882134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:4a4a4a4a cdw10:4a4a4a4a cdw11:4a4a4a4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x4a4a4a4a4a4a4a4a 00:09:44.267 [2024-10-15 11:09:24.882161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.553 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:44.553 #21 NEW cov: 12445 ft: 14550 corp: 13/1155b lim: 320 exec/s: 0 rss: 74Mb L: 68/197 MS: 1 ShuffleBytes- 00:09:44.553 [2024-10-15 11:09:24.942330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:00008000 00:09:44.553 [2024-10-15 11:09:24.942356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.553 #22 NEW cov: 12445 ft: 14577 corp: 14/1255b lim: 320 exec/s: 0 rss: 74Mb L: 100/197 MS: 1 ChangeByte- 00:09:44.553 [2024-10-15 11:09:25.002479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.553 [2024-10-15 11:09:25.002503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.553 #23 NEW cov: 12445 ft: 14612 corp: 15/1355b lim: 320 exec/s: 23 rss: 74Mb L: 100/197 MS: 1 ShuffleBytes- 00:09:44.553 [2024-10-15 11:09:25.042687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:63636363 cdw10:63636363 cdw11:63636363 00:09:44.553 [2024-10-15 11:09:25.042713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.553 [2024-10-15 11:09:25.042776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (63) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.553 [2024-10-15 11:09:25.042791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.553 #24 NEW cov: 12445 ft: 14738 corp: 16/1512b lim: 320 exec/s: 24 rss: 74Mb L: 157/197 MS: 1 InsertRepeatedBytes- 00:09:44.553 [2024-10-15 11:09:25.103097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:b7b7b7b7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.554 [2024-10-15 11:09:25.103123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.554 [2024-10-15 11:09:25.103188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b7) qid:0 cid:5 nsid:b7b7b7b7 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1212121212121212 00:09:44.554 [2024-10-15 11:09:25.103202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.554 [2024-10-15 11:09:25.103264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (12) qid:0 cid:6 nsid:12121212 cdw10:b7b7b7b7 cdw11:b7b7b7b7 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb7b7b7b7b7b7b7b7 00:09:44.554 [2024-10-15 11:09:25.103278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:44.554 [2024-10-15 11:09:25.103342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b7) qid:0 cid:7 nsid:b7b7b7b7 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb7b7b7b7b7b7b7b7 00:09:44.554 [2024-10-15 11:09:25.103357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:44.554 #25 NEW cov: 12446 ft: 15008 corp: 17/1773b lim: 320 exec/s: 25 rss: 74Mb L: 261/261 MS: 1 InsertRepeatedBytes- 00:09:44.554 [2024-10-15 11:09:25.162923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000006d cdw11:00000000 00:09:44.554 [2024-10-15 11:09:25.162950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.812 #26 NEW cov: 12446 ft: 15022 corp: 18/1874b lim: 320 exec/s: 26 rss: 74Mb L: 101/261 MS: 1 InsertByte- 00:09:44.812 [2024-10-15 11:09:25.203043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.812 [2024-10-15 11:09:25.203087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.812 #31 NEW cov: 12446 ft: 15053 corp: 19/1965b lim: 320 exec/s: 31 rss: 74Mb L: 91/261 MS: 5 InsertByte-CopyPart-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:09:44.812 [2024-10-15 11:09:25.243244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:63636363 cdw10:6363e4f9 cdw11:63636363 00:09:44.812 [2024-10-15 11:09:25.243269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.812 [2024-10-15 11:09:25.243349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (63) qid:0 cid:5 nsid:63636363 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:44.812 [2024-10-15 11:09:25.243364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.812 #32 NEW cov: 12446 ft: 15116 corp: 20/2130b lim: 320 exec/s: 32 rss: 74Mb L: 165/261 MS: 1 CMP- DE: "\377*\256E\344\375\371\344"- 00:09:44.812 [2024-10-15 11:09:25.303300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:00008000 
00:09:44.812 [2024-10-15 11:09:25.303325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.812 #33 NEW cov: 12446 ft: 15131 corp: 21/2230b lim: 320 exec/s: 33 rss: 75Mb L: 100/261 MS: 1 ShuffleBytes- 00:09:44.812 [2024-10-15 11:09:25.363555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.812 [2024-10-15 11:09:25.363581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.812 #34 NEW cov: 12446 ft: 15166 corp: 22/2321b lim: 320 exec/s: 34 rss: 75Mb L: 91/261 MS: 1 ChangeBit- 00:09:44.812 [2024-10-15 11:09:25.403734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:44.812 [2024-10-15 11:09:25.403760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.812 [2024-10-15 11:09:25.403821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:5 nsid:2c2c2c2c cdw10:2c2c2c2c cdw11:2c2c2c2c 00:09:44.812 [2024-10-15 11:09:25.403835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.812 #35 NEW cov: 12446 ft: 15191 corp: 23/2512b lim: 320 exec/s: 35 rss: 75Mb L: 191/261 MS: 1 InsertRepeatedBytes- 00:09:45.071 [2024-10-15 11:09:25.443751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:46ae2b cdw10:00000000 cdw11:00000000 00:09:45.071 [2024-10-15 11:09:25.443777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.071 #36 NEW cov: 12446 ft: 15275 corp: 24/2603b lim: 320 exec/s: 36 rss: 75Mb L: 91/261 MS: 1 CMP- DE: "\000+\256F\000\316\244\264"- 00:09:45.071 [2024-10-15 11:09:25.483805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:45.071 [2024-10-15 11:09:25.483830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.071 #37 NEW cov: 12446 ft: 15282 corp: 25/2703b lim: 320 exec/s: 37 rss: 75Mb L: 100/261 MS: 1 ChangeByte- 00:09:45.071 [2024-10-15 11:09:25.544011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:4a4a4a4a cdw10:4a4a4a4a cdw11:4a4a4a4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x4a4a4a4a 00:09:45.071 [2024-10-15 11:09:25.544053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.071 #38 NEW cov: 12446 ft: 15327 corp: 26/2775b lim: 320 exec/s: 38 rss: 75Mb L: 72/261 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:45.071 [2024-10-15 11:09:25.604201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:46ae2b cdw10:00000000 cdw11:00000000 00:09:45.071 [2024-10-15 11:09:25.604227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.071 #39 NEW cov: 12446 ft: 15342 corp: 27/2866b lim: 320 exec/s: 39 rss: 75Mb L: 91/261 MS: 1 ChangeBit- 00:09:45.071 [2024-10-15 11:09:25.664474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:45.071 [2024-10-15 11:09:25.664501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.071 [2024-10-15 11:09:25.664570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 00:09:45.071 [2024-10-15 11:09:25.664584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.330 #40 NEW cov: 12446 ft: 15370 corp: 28/3004b lim: 320 exec/s: 40 rss: 75Mb L: 138/261 MS: 1 CopyPart- 00:09:45.330 [2024-10-15 11:09:25.724495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:46ae2b cdw10:00000000 cdw11:00000000 00:09:45.330 [2024-10-15 11:09:25.724520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.330 #41 NEW cov: 12446 ft: 15386 corp: 29/3101b lim: 320 exec/s: 41 rss: 75Mb L: 97/261 MS: 1 CopyPart- 00:09:45.330 [2024-10-15 11:09:25.764604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:45.330 [2024-10-15 11:09:25.764631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.331 #42 NEW cov: 12446 ft: 15410 corp: 30/3170b lim: 320 exec/s: 42 rss: 75Mb L: 69/261 MS: 1 ChangeByte- 00:09:45.331 [2024-10-15 11:09:25.805050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:63636363 cdw10:63636363 cdw11:63636363 00:09:45.331 [2024-10-15 11:09:25.805077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.331 [2024-10-15 11:09:25.805141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (63) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:63636363 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:45.331 [2024-10-15 11:09:25.805155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.331 [2024-10-15 11:09:25.805220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (63) qid:0 cid:6 nsid:63636363 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x6363636363636363 00:09:45.331 [2024-10-15 11:09:25.805234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.331 [2024-10-15 11:09:25.805287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ffffff00 00:09:45.331 [2024-10-15 11:09:25.805301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:45.331 #43 NEW cov: 12446 ft: 15495 corp: 31/3479b lim: 320 exec/s: 43 rss: 75Mb L: 309/309 MS: 1 CopyPart- 00:09:45.331 [2024-10-15 11:09:25.845045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x7f7f7f7f7f7f7f7f 00:09:45.331 [2024-10-15 11:09:25.845076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.331 [2024-10-15 11:09:25.845141] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.331 [2024-10-15 11:09:25.845155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.331 NEW_FUNC[1/1]: 0x1936638 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:09:45.331 #44 NEW cov: 12468 ft: 15849 corp: 32/3652b lim: 320 exec/s: 44 rss: 75Mb L: 173/309 MS: 1 InsertRepeatedBytes- 00:09:45.331 [2024-10-15 11:09:25.905282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f887f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x7f7f7f7f7f7f7f7f 00:09:45.331 [2024-10-15 11:09:25.905309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.331 [2024-10-15 11:09:25.905389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.331 [2024-10-15 11:09:25.905403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.331 #45 NEW cov: 12468 ft: 15880 corp: 33/3825b lim: 320 exec/s: 45 rss: 75Mb L: 173/309 MS: 1 ChangeBinInt- 00:09:45.590 [2024-10-15 11:09:25.965291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:45.590 [2024-10-15 11:09:25.965317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.590 [2024-10-15 11:09:25.965394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:5 nsid:2c2c2c2c cdw10:2c2c2c2c cdw11:2c2c2c2c 00:09:45.590 [2024-10-15 11:09:25.965409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.590 [2024-10-15 11:09:26.025453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:45.590 [2024-10-15 11:09:26.025480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.590 [2024-10-15 11:09:26.025540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:5 nsid:2c2c2c2c cdw10:2c2c2c2c cdw11:2c2c2c2c 00:09:45.590 [2024-10-15 11:09:26.025554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.590 #47 NEW cov: 12468 ft: 15901 corp: 34/4016b lim: 320 exec/s: 23 rss: 75Mb L: 191/309 MS: 2 ChangeBit-CMP- DE: "\001+\256FP\034\274."- 00:09:45.590 #47 DONE cov: 12468 ft: 15901 corp: 34/4016b lim: 320 exec/s: 23 rss: 75Mb 00:09:45.590 ###### Recommended dictionary. ###### 00:09:45.590 "\377\377\377\377" # Uses: 1 00:09:45.590 "\377*\256E\344\375\371\344" # Uses: 0 00:09:45.590 "\000+\256F\000\316\244\264" # Uses: 0 00:09:45.590 "\000\000\000\000" # Uses: 0 00:09:45.590 "\001+\256FP\034\274." # Uses: 0 00:09:45.590 ###### End of recommended dictionary. 
###### 00:09:45.590 Done 47 runs in 2 second(s) 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:45.590 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:45.591 11:09:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:09:45.591 [2024-10-15 11:09:26.199716] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:45.591 [2024-10-15 11:09:26.199785] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714228 ] 00:09:45.849 [2024-10-15 11:09:26.453391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.109 [2024-10-15 11:09:26.503348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.109 [2024-10-15 11:09:26.562255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.109 [2024-10-15 11:09:26.578404] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:09:46.109 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:46.109 INFO: Seed: 478301969 00:09:46.109 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:09:46.109 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:09:46.109 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:09:46.109 INFO: A corpus is not provided, starting from an empty corpus 00:09:46.109 #2 INITED exec/s: 0 rss: 66Mb 00:09:46.109 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:46.109 This may also happen if the target rejected all inputs we tried so far 00:09:46.109 [2024-10-15 11:09:26.627427] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:46.109 [2024-10-15 11:09:26.627677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.109 [2024-10-15 11:09:26.627708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.368 NEW_FUNC[1/714]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:09:46.368 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:46.368 #3 NEW cov: 12262 ft: 12255 corp: 2/7b lim: 30 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:09:46.368 [2024-10-15 11:09:26.948240] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:46.368 [2024-10-15 11:09:26.948513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.368 [2024-10-15 11:09:26.948553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.368 NEW_FUNC[1/1]: 0x1982178 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1541 00:09:46.368 #4 NEW cov: 12377 ft: 12794 corp: 3/13b lim: 30 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:46.627 [2024-10-15 11:09:27.008327] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11232) > buf size (4096) 00:09:46.627 [2024-10-15 11:09:27.008545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af700ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.627 [2024-10-15 11:09:27.008572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.627 #5 NEW cov: 12383 ft: 13085 corp: 4/19b lim: 30 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt- 00:09:46.627 [2024-10-15 11:09:27.048390] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002a2a 00:09:46.627 [2024-10-15 11:09:27.048627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a00022a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.627 [2024-10-15 11:09:27.048652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.627 #6 NEW cov: 12474 ft: 13345 corp: 5/29b lim: 30 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 
InsertRepeatedBytes- 00:09:46.627 [2024-10-15 11:09:27.088502] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796852) > buf size (4096) 00:09:46.627 [2024-10-15 11:09:27.088724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2c83f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.627 [2024-10-15 11:09:27.088750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.627 #7 NEW cov: 12474 ft: 13487 corp: 6/36b lim: 30 exec/s: 0 rss: 74Mb L: 7/10 MS: 1 InsertByte- 00:09:46.627 [2024-10-15 11:09:27.148678] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:46.627 [2024-10-15 11:09:27.148917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2c83f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.627 [2024-10-15 11:09:27.148943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.627 #8 NEW cov: 12474 ft: 13574 corp: 7/43b lim: 30 exec/s: 0 rss: 74Mb L: 7/10 MS: 1 ChangeBit- 00:09:46.627 [2024-10-15 11:09:27.208824] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:46.627 [2024-10-15 11:09:27.209076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.627 [2024-10-15 11:09:27.209102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.627 #9 NEW cov: 12474 ft: 13664 corp: 8/49b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 CopyPart- 00:09:46.886 [2024-10-15 11:09:27.269186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.886 [2024-10-15 11:09:27.269213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.886 #10 NEW cov: 12491 ft: 13850 corp: 9/55b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ShuffleBytes- 00:09:46.886 [2024-10-15 11:09:27.309067] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (64516) > buf size (4096) 00:09:46.886 [2024-10-15 11:09:27.309291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.886 [2024-10-15 11:09:27.309317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.886 #11 NEW cov: 12491 ft: 13933 corp: 10/61b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ChangeByte- 00:09:46.886 [2024-10-15 11:09:27.369276] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:46.886 [2024-10-15 11:09:27.369494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2c83f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.886 [2024-10-15 11:09:27.369520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.886 #17 NEW cov: 12491 ft: 14032 corp: 11/71b lim: 30 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CrossOver- 
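Two decoding notes for the stream above. First, the status lines: in "#5 NEW cov: 12474 ft: 13085 corp: 4/19b lim: 30 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 ChangeBinInt-", cov counts covered code edges, ft counts coverage features, corp is corpus entries/total bytes, lim is the current input-length cap, L is this input's length versus the largest corpus entry so far, and MS lists the mutation(s) that produced it. Second, the recurring "Get log page: len (10244) > buf size (4096)" errors follow directly from CDW10: NVMe's GET LOG PAGE keeps NUMDL (the 0-based dword transfer count, low half) in CDW10 bits 31:16, so for the cdw10:0a000000 commands above (NUMDU in CDW11 being zero):

    NUMDL = cdw10[31:16] = 0x0a00 = 2560
    len   = (NUMDL + 1) * 4 = 10244 bytes, which exceeds the 4096-byte buffer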
00:09:46.886 [2024-10-15 11:09:27.429414] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:46.886 [2024-10-15 11:09:27.429639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.886 [2024-10-15 11:09:27.429665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.886 #18 NEW cov: 12491 ft: 14048 corp: 12/77b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 CopyPart- 00:09:46.886 [2024-10-15 11:09:27.469564] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:46.886 [2024-10-15 11:09:27.469706] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (1536) > len (4) 00:09:46.886 [2024-10-15 11:09:27.469922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.886 [2024-10-15 11:09:27.469949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.886 [2024-10-15 11:09:27.470006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.886 [2024-10-15 11:09:27.470022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:46.886 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:46.886 #19 NEW cov: 12520 ft: 14475 corp: 13/91b lim: 30 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\006"- 00:09:47.145 [2024-10-15 11:09:27.529716] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:47.145 [2024-10-15 11:09:27.529938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2c83f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.145 [2024-10-15 11:09:27.529964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.145 #20 NEW cov: 12520 ft: 14508 corp: 14/101b lim: 30 exec/s: 0 rss: 74Mb L: 10/14 MS: 1 ChangeByte- 00:09:47.146 [2024-10-15 11:09:27.589893] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261164) > buf size (4096) 00:09:47.146 [2024-10-15 11:09:27.590137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.590163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.146 #21 NEW cov: 12520 ft: 14563 corp: 15/107b lim: 30 exec/s: 21 rss: 74Mb L: 6/14 MS: 1 ShuffleBytes- 00:09:47.146 [2024-10-15 11:09:27.630006] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:47.146 [2024-10-15 11:09:27.630151] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (1536) > len (4) 00:09:47.146 [2024-10-15 11:09:27.630370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.630397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.146 [2024-10-15 11:09:27.630456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.630472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.146 #22 NEW cov: 12520 ft: 14573 corp: 16/124b lim: 30 exec/s: 22 rss: 74Mb L: 17/17 MS: 1 CopyPart- 00:09:47.146 [2024-10-15 11:09:27.690248] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:47.146 [2024-10-15 11:09:27.690366] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6ff 00:09:47.146 [2024-10-15 11:09:27.690482] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (567996) > buf size (4096) 00:09:47.146 [2024-10-15 11:09:27.690593] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145412) > buf size (4096) 00:09:47.146 [2024-10-15 11:09:27.690812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.690838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.146 [2024-10-15 11:09:27.690897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.690911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.146 [2024-10-15 11:09:27.690967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:2aae0247 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.690981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.146 [2024-10-15 11:09:27.691039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.691069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.146 #23 NEW cov: 12520 ft: 15182 corp: 17/149b lim: 30 exec/s: 23 rss: 75Mb L: 25/25 MS: 1 CMP- DE: "\377*\256G\246d\024\216"- 00:09:47.146 [2024-10-15 11:09:27.750327] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047724) > buf size (4096) 00:09:47.146 [2024-10-15 11:09:27.750567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff2a83ae cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.146 [2024-10-15 11:09:27.750593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.146 #27 NEW cov: 12520 ft: 15191 corp: 18/160b lim: 30 exec/s: 27 rss: 75Mb L: 11/25 MS: 4 EraseBytes-ChangeByte-ChangeByte-PersAutoDict- DE: "\377*\256G\246d\024\216"- 00:09:47.408 [2024-10-15 11:09:27.790625] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:27.790651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.408 #28 NEW cov: 12520 ft: 15201 corp: 19/166b lim: 30 exec/s: 28 rss: 75Mb L: 6/25 MS: 1 ShuffleBytes- 00:09:47.408 [2024-10-15 11:09:27.830539] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786476) > buf size (4096) 00:09:47.408 [2024-10-15 11:09:27.830780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a83f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:27.830806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.408 #29 NEW cov: 12520 ft: 15313 corp: 20/173b lim: 30 exec/s: 29 rss: 75Mb L: 7/25 MS: 1 CopyPart- 00:09:47.408 [2024-10-15 11:09:27.870661] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:47.408 [2024-10-15 11:09:27.870903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3383f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:27.870929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.408 #30 NEW cov: 12520 ft: 15353 corp: 21/183b lim: 30 exec/s: 30 rss: 75Mb L: 10/25 MS: 1 ChangeBinInt- 00:09:47.408 [2024-10-15 11:09:27.930900] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:47.408 [2024-10-15 11:09:27.931243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:27.931269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.408 [2024-10-15 11:09:27.931327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:27.931342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.408 #31 NEW cov: 12520 ft: 15360 corp: 22/197b lim: 30 exec/s: 31 rss: 75Mb L: 14/25 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\006"- 00:09:47.408 [2024-10-15 11:09:27.970951] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262188) > buf size (4096) 00:09:47.408 [2024-10-15 11:09:27.971175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a81f7 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:27.971201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.408 #32 NEW cov: 12520 ft: 15401 corp: 23/204b lim: 30 exec/s: 32 rss: 75Mb L: 7/25 MS: 1 ChangeByte- 00:09:47.408 [2024-10-15 11:09:28.031348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000003f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.408 [2024-10-15 11:09:28.031374] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.668 #34 NEW cov: 12520 ft: 15411 corp: 24/210b lim: 30 exec/s: 34 rss: 75Mb L: 6/25 MS: 2 EraseBytes-InsertByte- 00:09:47.668 [2024-10-15 11:09:28.071232] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11232) > buf size (4096) 00:09:47.668 [2024-10-15 11:09:28.071471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af700ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.071497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.668 #35 NEW cov: 12520 ft: 15417 corp: 25/216b lim: 30 exec/s: 35 rss: 75Mb L: 6/25 MS: 1 EraseBytes- 00:09:47.668 [2024-10-15 11:09:28.131594] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:47.668 [2024-10-15 11:09:28.131716] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6ff 00:09:47.668 [2024-10-15 11:09:28.131831] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (567996) > buf size (4096) 00:09:47.668 [2024-10-15 11:09:28.131945] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145412) > buf size (4096) 00:09:47.668 [2024-10-15 11:09:28.132286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.132312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.668 [2024-10-15 11:09:28.132371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.132389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.668 [2024-10-15 11:09:28.132444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:2aae0247 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.132459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.668 [2024-10-15 11:09:28.132516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:8e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.132531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.668 [2024-10-15 11:09:28.132584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.132599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:47.668 #36 NEW cov: 12520 ft: 15524 corp: 26/246b lim: 30 exec/s: 36 rss: 75Mb L: 30/30 MS: 1 CopyPart- 00:09:47.668 [2024-10-15 11:09:28.191608] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047724) > buf size (4096) 00:09:47.668 [2024-10-15 11:09:28.191834] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff2a83ae cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.191859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.668 #37 NEW cov: 12520 ft: 15540 corp: 27/254b lim: 30 exec/s: 37 rss: 75Mb L: 8/30 MS: 1 EraseBytes- 00:09:47.668 [2024-10-15 11:09:28.252000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.252025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.668 #38 NEW cov: 12520 ft: 15554 corp: 28/260b lim: 30 exec/s: 38 rss: 75Mb L: 6/30 MS: 1 ChangeByte- 00:09:47.668 [2024-10-15 11:09:28.291892] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:47.668 [2024-10-15 11:09:28.292033] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2a 00:09:47.668 [2024-10-15 11:09:28.292266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.292292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.668 [2024-10-15 11:09:28.292348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.668 [2024-10-15 11:09:28.292363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.927 #39 NEW cov: 12520 ft: 15560 corp: 29/275b lim: 30 exec/s: 39 rss: 75Mb L: 15/30 MS: 1 InsertByte- 00:09:47.927 [2024-10-15 11:09:28.352115] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002cf7 00:09:47.927 [2024-10-15 11:09:28.352251] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:09:47.927 [2024-10-15 11:09:28.352468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a2c02f7 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.352494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.927 [2024-10-15 11:09:28.352551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.352568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.927 #40 NEW cov: 12520 ft: 15595 corp: 30/289b lim: 30 exec/s: 40 rss: 75Mb L: 14/30 MS: 1 CopyPart- 00:09:47.927 [2024-10-15 11:09:28.392233] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7e 00:09:47.927 [2024-10-15 11:09:28.392354] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6 00:09:47.927 [2024-10-15 11:09:28.392671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.392697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.927 [2024-10-15 11:09:28.392751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.392767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.927 [2024-10-15 11:09:28.392823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.392838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.927 #41 NEW cov: 12520 ft: 15813 corp: 31/307b lim: 30 exec/s: 41 rss: 75Mb L: 18/30 MS: 1 InsertByte- 00:09:47.927 [2024-10-15 11:09:28.432379] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:47.927 [2024-10-15 11:09:28.432516] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:47.927 [2024-10-15 11:09:28.432630] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (10752) > len (4) 00:09:47.927 [2024-10-15 11:09:28.432849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3383f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.432875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.927 [2024-10-15 11:09:28.432935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a3383f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.432950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.927 [2024-10-15 11:09:28.433006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.927 [2024-10-15 11:09:28.433021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.927 #42 NEW cov: 12520 ft: 15829 corp: 32/325b lim: 30 exec/s: 42 rss: 75Mb L: 18/30 MS: 1 CopyPart- 00:09:47.927 [2024-10-15 11:09:28.492609] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:47.927 [2024-10-15 11:09:28.492729] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000002 00:09:47.927 [2024-10-15 11:09:28.492842] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1047724) > buf size (4096) 00:09:47.927 [2024-10-15 11:09:28.492955] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (21052) > buf size (4096) 00:09:47.928 [2024-10-15 11:09:28.493201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a3383f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.493227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.928 [2024-10-15 
11:09:28.493285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a3383f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.493303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.928 [2024-10-15 11:09:28.493359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff2a83ae cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.493373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.928 [2024-10-15 11:09:28.493428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:148e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.493442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.928 #43 NEW cov: 12520 ft: 15876 corp: 33/351b lim: 30 exec/s: 43 rss: 75Mb L: 26/30 MS: 1 PersAutoDict- DE: "\377*\256G\246d\024\216"- 00:09:47.928 [2024-10-15 11:09:28.552718] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:09:47.928 [2024-10-15 11:09:28.552840] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:09:47.928 [2024-10-15 11:09:28.553163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.553189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.928 [2024-10-15 11:09:28.553246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.553261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.928 [2024-10-15 11:09:28.553318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.928 [2024-10-15 11:09:28.553332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:48.187 #44 NEW cov: 12520 ft: 15894 corp: 34/373b lim: 30 exec/s: 44 rss: 75Mb L: 22/30 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\006"- 00:09:48.187 [2024-10-15 11:09:28.592769] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000012 00:09:48.187 [2024-10-15 11:09:28.592909] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200001212 00:09:48.187 [2024-10-15 11:09:28.593149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.187 [2024-10-15 11:09:28.593185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.187 [2024-10-15 11:09:28.593240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:12120212 
cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.187 [2024-10-15 11:09:28.593255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:48.187 #46 NEW cov: 12520 ft: 15898 corp: 35/388b lim: 30 exec/s: 23 rss: 75Mb L: 15/30 MS: 2 EraseBytes-InsertRepeatedBytes- 00:09:48.187 #46 DONE cov: 12520 ft: 15898 corp: 35/388b lim: 30 exec/s: 23 rss: 75Mb 00:09:48.187 ###### Recommended dictionary. ###### 00:09:48.187 "\000\000\000\000\000\000\000\006" # Uses: 2 00:09:48.187 "\377*\256G\246d\024\216" # Uses: 2 00:09:48.187 ###### End of recommended dictionary. ###### 00:09:48.187 Done 46 runs in 2 second(s) 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:48.187 11:09:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:09:48.187 [2024-10-15 11:09:28.770694] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
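The "###### Recommended dictionary. ######" block that closes the run above lists byte strings libFuzzer found repeatedly useful, printed with C-style octal escapes. This script does not feed them back, but a stock libFuzzer target can consume them via -dict=<file> in AFL dictionary syntax with hex escapes; whether the SPDK wrapper forwards -dict is an assumption to verify. Converted byte-for-byte, the two entries would read:

    # nvmf_1.dict (hypothetical file name)
    kw1="\x00\x00\x00\x00\x00\x00\x00\x06"
    kw2="\xff\x2a\xae\x47\xa6\x64\x14\x8e"

where kw2 is the octal "\377*\256G\246d\024\216" rewritten in hex.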
00:09:48.187 [2024-10-15 11:09:28.770762] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714567 ] 00:09:48.446 [2024-10-15 11:09:29.027511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.706 [2024-10-15 11:09:29.082245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.706 [2024-10-15 11:09:29.141360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.706 [2024-10-15 11:09:29.157519] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:09:48.706 INFO: Running with entropic power schedule (0xFF, 100). 00:09:48.706 INFO: Seed: 3058323483 00:09:48.706 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:09:48.706 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:09:48.706 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:09:48.706 INFO: A corpus is not provided, starting from an empty corpus 00:09:48.706 #2 INITED exec/s: 0 rss: 66Mb 00:09:48.706 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:48.706 This may also happen if the target rejected all inputs we tried so far 00:09:48.706 [2024-10-15 11:09:29.203035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:012b004a cdw11:0300ae48 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.706 [2024-10-15 11:09:29.203066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.965 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:09:48.965 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:48.965 #12 NEW cov: 12203 ft: 12195 corp: 2/11b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 5 ChangeByte-ChangeBit-ShuffleBytes-CrossOver-CMP- DE: "\001+\256H\003=R\242"- 00:09:48.965 [2024-10-15 11:09:29.543901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:012b004a cdw11:0100ae4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:48.965 [2024-10-15 11:09:29.543938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.965 #18 NEW cov: 12316 ft: 12743 corp: 3/21b lim: 35 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:09:49.225 [2024-10-15 11:09:29.604401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4040004a cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.604429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.225 [2024-10-15 11:09:29.604502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.604517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.225 
[2024-10-15 11:09:29.604575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.604590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.225 [2024-10-15 11:09:29.604646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.604661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.225 #20 NEW cov: 12322 ft: 13697 corp: 4/55b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 2 CrossOver-InsertRepeatedBytes- 00:09:49.225 #23 NEW cov: 12407 ft: 14058 corp: 5/67b lim: 35 exec/s: 0 rss: 74Mb L: 12/34 MS: 3 ChangeBit-CopyPart-CrossOver- 00:09:49.225 [2024-10-15 11:09:29.684760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4040004a cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.684786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.225 [2024-10-15 11:09:29.684861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.684876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.225 [2024-10-15 11:09:29.684932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:40003f40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.684946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.225 [2024-10-15 11:09:29.685002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.685016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.225 [2024-10-15 11:09:29.685073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.685087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:49.225 #24 NEW cov: 12407 ft: 14175 corp: 6/102b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:09:49.225 [2024-10-15 11:09:29.744369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:012b004a cdw11:0300ae48 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.225 [2024-10-15 11:09:29.744398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.225 #25 NEW cov: 12407 ft: 14296 corp: 7/112b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 PersAutoDict- DE: "\001+\256H\003=R\242"- 00:09:49.225 #29 NEW cov: 12407 ft: 14385 corp: 8/123b lim: 35 exec/s: 0 rss: 74Mb L: 11/35 MS: 4 
ChangeBit-InsertByte-InsertByte-PersAutoDict- DE: "\001+\256H\003=R\242"- 00:09:49.484 #30 NEW cov: 12407 ft: 14797 corp: 9/134b lim: 35 exec/s: 0 rss: 74Mb L: 11/35 MS: 1 ShuffleBytes- 00:09:49.484 [2024-10-15 11:09:29.884993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:524a003d cdw11:4000a240 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.484 [2024-10-15 11:09:29.885020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.484 #36 NEW cov: 12407 ft: 14940 corp: 10/149b lim: 35 exec/s: 0 rss: 74Mb L: 15/35 MS: 1 CrossOver- 00:09:49.484 [2024-10-15 11:09:29.945071] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:49.484 [2024-10-15 11:09:29.945206] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:49.484 [2024-10-15 11:09:29.945493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3d520003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.484 [2024-10-15 11:09:29.945521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.485 [2024-10-15 11:09:29.945579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:29.945596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.485 [2024-10-15 11:09:29.945650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:29.945668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.485 #37 NEW cov: 12425 ft: 15208 corp: 11/178b lim: 35 exec/s: 0 rss: 74Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:09:49.485 [2024-10-15 11:09:29.995207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:012b004a cdw11:0100ae4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:29.995233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.485 [2024-10-15 11:09:29.995292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a20a0052 cdw11:ff00feff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:29.995308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.485 #38 NEW cov: 12425 ft: 15364 corp: 12/196b lim: 35 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 CMP- DE: "\376\377\377\377\000\000\000\000"- 00:09:49.485 [2024-10-15 11:09:30.055331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:b63a0085 cdw11:ae00012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:30.055363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.485 #39 NEW cov: 12425 ft: 15386 corp: 13/207b lim: 35 exec/s: 0 rss: 74Mb L: 11/35 MS: 1 ChangeByte- 00:09:49.485 
[2024-10-15 11:09:30.095841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4040004a cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:30.095875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.485 [2024-10-15 11:09:30.095932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:30.095951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.485 [2024-10-15 11:09:30.096011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:30.096025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.485 [2024-10-15 11:09:30.096101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.485 [2024-10-15 11:09:30.096115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.744 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:49.744 #40 NEW cov: 12448 ft: 15450 corp: 14/241b lim: 35 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:09:49.744 [2024-10-15 11:09:30.136018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4040004a cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.136048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.136122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.136137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.136194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:bf00c1bf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.136208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.136265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bfbf00bf cdw11:4000b640 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.136279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.136334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.136348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:49.744 #41 NEW 
cov: 12448 ft: 15463 corp: 15/276b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:09:49.744 [2024-10-15 11:09:30.195802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ae4a004a cdw11:5200013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.195828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.195886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.195901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.744 #42 NEW cov: 12448 ft: 15480 corp: 16/294b lim: 35 exec/s: 42 rss: 74Mb L: 18/35 MS: 1 CopyPart- 00:09:49.744 [2024-10-15 11:09:30.255942] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:49.744 [2024-10-15 11:09:30.256078] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:49.744 [2024-10-15 11:09:30.256372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:033d0048 cdw11:00005200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.256405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.256463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.256479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.744 [2024-10-15 11:09:30.256536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.256553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.744 #48 NEW cov: 12448 ft: 15537 corp: 17/324b lim: 35 exec/s: 48 rss: 74Mb L: 30/35 MS: 1 InsertByte- 00:09:49.744 [2024-10-15 11:09:30.315981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:b63a0085 cdw11:ae00012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.744 [2024-10-15 11:09:30.316007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.744 #49 NEW cov: 12448 ft: 15552 corp: 18/335b lim: 35 exec/s: 49 rss: 75Mb L: 11/35 MS: 1 CrossOver- 00:09:50.003 #50 NEW cov: 12448 ft: 15597 corp: 19/346b lim: 35 exec/s: 50 rss: 75Mb L: 11/35 MS: 1 ShuffleBytes- 00:09:50.003 [2024-10-15 11:09:30.416423] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:50.003 [2024-10-15 11:09:30.416567] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:50.003 [2024-10-15 11:09:30.416857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:033d0048 cdw11:b4005200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.416883] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.003 [2024-10-15 11:09:30.416942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.416959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.003 [2024-10-15 11:09:30.417015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.417034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.003 #51 NEW cov: 12448 ft: 15624 corp: 20/377b lim: 35 exec/s: 51 rss: 75Mb L: 31/35 MS: 1 InsertByte- 00:09:50.003 #52 NEW cov: 12448 ft: 15645 corp: 21/389b lim: 35 exec/s: 52 rss: 75Mb L: 12/35 MS: 1 ChangeByte- 00:09:50.003 [2024-10-15 11:09:30.517039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4040004a cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.517065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.003 [2024-10-15 11:09:30.517123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.517138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.003 [2024-10-15 11:09:30.517195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:bf00c1bf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.517209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.003 [2024-10-15 11:09:30.517265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bf4000bf cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.003 [2024-10-15 11:09:30.517282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.003 #53 NEW cov: 12448 ft: 15737 corp: 22/422b lim: 35 exec/s: 53 rss: 75Mb L: 33/35 MS: 1 EraseBytes- 00:09:50.004 [2024-10-15 11:09:30.576954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ae48002b cdw11:5200033d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.004 [2024-10-15 11:09:30.576991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.004 #54 NEW cov: 12448 ft: 15744 corp: 23/436b lim: 35 exec/s: 54 rss: 75Mb L: 14/35 MS: 1 InsertRepeatedBytes- 00:09:50.263 #55 NEW cov: 12448 ft: 15756 corp: 24/447b lim: 35 exec/s: 55 rss: 75Mb L: 11/35 MS: 1 ShuffleBytes- 00:09:50.263 [2024-10-15 11:09:30.657136] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:50.263 [2024-10-15 11:09:30.657274] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:50.263 [2024-10-15 
11:09:30.657565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3d520003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.657594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.263 [2024-10-15 11:09:30.657653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.657670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.263 [2024-10-15 11:09:30.657726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.657743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.263 #56 NEW cov: 12448 ft: 15767 corp: 25/476b lim: 35 exec/s: 56 rss: 75Mb L: 29/35 MS: 1 ChangeByte- 00:09:50.263 [2024-10-15 11:09:30.697124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:012b004a cdw11:a200ae52 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.697149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.263 #57 NEW cov: 12448 ft: 15794 corp: 26/486b lim: 35 exec/s: 57 rss: 75Mb L: 10/35 MS: 1 ShuffleBytes- 00:09:50.263 [2024-10-15 11:09:30.757674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4040004a cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.757699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.263 [2024-10-15 11:09:30.757773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.757788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.263 [2024-10-15 11:09:30.757847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.757860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.263 [2024-10-15 11:09:30.757920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.757934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.263 #58 NEW cov: 12448 ft: 15850 corp: 27/520b lim: 35 exec/s: 58 rss: 75Mb L: 34/35 MS: 1 ChangeByte- 00:09:50.263 [2024-10-15 11:09:30.797580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:52a2003d cdw11:3d000a03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.263 [2024-10-15 11:09:30.797607] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.263 #59 NEW cov: 12448 ft: 15882 corp: 28/535b lim: 35 exec/s: 59 rss: 75Mb L: 15/35 MS: 1 CopyPart- 00:09:50.263 #60 NEW cov: 12448 ft: 15890 corp: 29/546b lim: 35 exec/s: 60 rss: 75Mb L: 11/35 MS: 1 CopyPart- 00:09:50.522 [2024-10-15 11:09:30.897665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:b63a0085 cdw11:ae00012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.522 [2024-10-15 11:09:30.897691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.522 #61 NEW cov: 12448 ft: 15899 corp: 30/557b lim: 35 exec/s: 61 rss: 75Mb L: 11/35 MS: 1 ChangeBit- 00:09:50.522 [2024-10-15 11:09:30.958020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:524a003d cdw11:4000a240 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.522 [2024-10-15 11:09:30.958051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.522 #62 NEW cov: 12448 ft: 15923 corp: 31/572b lim: 35 exec/s: 62 rss: 75Mb L: 15/35 MS: 1 ChangeBinInt- 00:09:50.522 [2024-10-15 11:09:31.018020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:014a004a cdw11:a2000152 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.522 [2024-10-15 11:09:31.018051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.522 #63 NEW cov: 12448 ft: 15964 corp: 32/582b lim: 35 exec/s: 63 rss: 75Mb L: 10/35 MS: 1 CopyPart- 00:09:50.522 [2024-10-15 11:09:31.078376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a252003d cdw11:3d000a03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.522 [2024-10-15 11:09:31.078402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.522 #64 NEW cov: 12448 ft: 15983 corp: 33/597b lim: 35 exec/s: 64 rss: 75Mb L: 15/35 MS: 1 ShuffleBytes- 00:09:50.522 [2024-10-15 11:09:31.138324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0103004a cdw11:a2003d52 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.522 [2024-10-15 11:09:31.138350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.782 #65 NEW cov: 12448 ft: 16023 corp: 34/604b lim: 35 exec/s: 65 rss: 75Mb L: 7/35 MS: 1 EraseBytes- 00:09:50.782 [2024-10-15 11:09:31.178563] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:50.782 [2024-10-15 11:09:31.178701] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:50.782 [2024-10-15 11:09:31.178996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:033d0048 cdw11:00005200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.782 [2024-10-15 11:09:31.179022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.782 [2024-10-15 11:09:31.179087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:50.782 [2024-10-15 11:09:31.179103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.782 [2024-10-15 11:09:31.179160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:02000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:50.782 [2024-10-15 11:09:31.179177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.782 #66 NEW cov: 12448 ft: 16033 corp: 35/634b lim: 35 exec/s: 33 rss: 75Mb L: 30/35 MS: 1 ChangeBit- 00:09:50.782 #66 DONE cov: 12448 ft: 16033 corp: 35/634b lim: 35 exec/s: 33 rss: 75Mb 00:09:50.782 ###### Recommended dictionary. ###### 00:09:50.782 "\001+\256H\003=R\242" # Uses: 2 00:09:50.782 "\376\377\377\377\000\000\000\000" # Uses: 0 00:09:50.782 ###### End of recommended dictionary. ###### 00:09:50.782 Done 66 runs in 2 second(s) 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:50.782 11:09:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:09:50.782 [2024-10-15 11:09:31.353992] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:50.782 [2024-10-15 11:09:31.354072] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714882 ] 00:09:51.041 [2024-10-15 11:09:31.614210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.041 [2024-10-15 11:09:31.662425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.300 [2024-10-15 11:09:31.721297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.300 [2024-10-15 11:09:31.737446] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:09:51.300 INFO: Running with entropic power schedule (0xFF, 100). 00:09:51.300 INFO: Seed: 1344344334 00:09:51.300 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:09:51.300 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:09:51.300 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:09:51.300 INFO: A corpus is not provided, starting from an empty corpus 00:09:51.300 #2 INITED exec/s: 0 rss: 66Mb 00:09:51.300 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:51.300 This may also happen if the target rejected all inputs we tried so far 00:09:51.557 NEW_FUNC[1/703]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:09:51.557 NEW_FUNC[2/703]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:51.557 #5 NEW cov: 12087 ft: 12088 corp: 2/16b lim: 20 exec/s: 0 rss: 73Mb L: 15/15 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:09:51.557 [2024-10-15 11:09:32.144592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.557 [2024-10-15 11:09:32.144655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.817 NEW_FUNC[1/17]: 0x1337078 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:09:51.817 NEW_FUNC[2/17]: 0x1337bf8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:09:51.817 #6 NEW cov: 12488 ft: 13329 corp: 3/36b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:09:51.817 #7 NEW cov: 12494 ft: 13613 corp: 4/51b lim: 20 exec/s: 0 rss: 73Mb L: 15/20 MS: 1 ChangeBit- 00:09:51.817 #12 NEW cov: 12580 ft: 14116 corp: 5/62b lim: 20 exec/s: 0 rss: 73Mb L: 11/20 MS: 5 ShuffleBytes-ChangeBit-ChangeByte-CopyPart-InsertRepeatedBytes- 00:09:51.817 #13 NEW cov: 12580 ft: 14195 corp: 6/74b lim: 20 exec/s: 0 rss: 73Mb L: 12/20 MS: 1 CrossOver- 00:09:51.817 #15 NEW cov: 12580 ft: 14337 corp: 7/86b lim: 20 exec/s: 0 rss: 73Mb L: 12/20 MS: 2 ChangeByte-CrossOver- 00:09:51.817 [2024-10-15 11:09:32.394955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.817 
[2024-10-15 11:09:32.394989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.817 #16 NEW cov: 12580 ft: 14435 corp: 8/106b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:09:52.076 #17 NEW cov: 12580 ft: 14458 corp: 9/126b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:09:52.076 [2024-10-15 11:09:32.515286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.076 [2024-10-15 11:09:32.515317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.076 NEW_FUNC[1/2]: 0x14ad238 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:784 00:09:52.076 NEW_FUNC[2/2]: 0x14d4cb8 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3702 00:09:52.076 #18 NEW cov: 12637 ft: 14603 corp: 10/145b lim: 20 exec/s: 0 rss: 74Mb L: 19/20 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:09:52.076 #19 NEW cov: 12637 ft: 14656 corp: 11/157b lim: 20 exec/s: 0 rss: 74Mb L: 12/20 MS: 1 InsertByte- 00:09:52.076 #20 NEW cov: 12637 ft: 14699 corp: 12/171b lim: 20 exec/s: 0 rss: 74Mb L: 14/20 MS: 1 CopyPart- 00:09:52.076 [2024-10-15 11:09:32.665799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.076 [2024-10-15 11:09:32.665830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.334 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:52.335 #21 NEW cov: 12660 ft: 14775 corp: 13/190b lim: 20 exec/s: 0 rss: 74Mb L: 19/20 MS: 1 ChangeBinInt- 00:09:52.335 #22 NEW cov: 12660 ft: 15061 corp: 14/194b lim: 20 exec/s: 0 rss: 74Mb L: 4/20 MS: 1 CrossOver- 00:09:52.335 #23 NEW cov: 12660 ft: 15098 corp: 15/205b lim: 20 exec/s: 23 rss: 74Mb L: 11/20 MS: 1 EraseBytes- 00:09:52.335 #24 NEW cov: 12660 ft: 15115 corp: 16/217b lim: 20 exec/s: 24 rss: 74Mb L: 12/20 MS: 1 ChangeBinInt- 00:09:52.335 [2024-10-15 11:09:32.886390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.335 [2024-10-15 11:09:32.886422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.335 #25 NEW cov: 12660 ft: 15190 corp: 17/229b lim: 20 exec/s: 25 rss: 74Mb L: 12/20 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:09:52.594 #26 NEW cov: 12660 ft: 15225 corp: 18/249b lim: 20 exec/s: 26 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:09:52.594 #27 NEW cov: 12660 ft: 15243 corp: 19/261b lim: 20 exec/s: 27 rss: 74Mb L: 12/20 MS: 1 ShuffleBytes- 00:09:52.594 #29 NEW cov: 12660 ft: 15255 corp: 20/278b lim: 20 exec/s: 29 rss: 74Mb L: 17/20 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:52.594 #30 NEW cov: 12660 ft: 15270 corp: 21/297b lim: 20 exec/s: 30 rss: 74Mb L: 19/20 MS: 1 CopyPart- 00:09:52.594 #31 NEW cov: 12660 ft: 15281 corp: 22/301b lim: 20 exec/s: 31 rss: 74Mb L: 4/20 MS: 1 ShuffleBytes- 00:09:52.853 #32 NEW cov: 12660 ft: 15293 corp: 23/316b lim: 20 exec/s: 32 rss: 74Mb L: 15/20 MS: 1 InsertRepeatedBytes- 
00:09:52.853 [2024-10-15 11:09:33.247476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.853 [2024-10-15 11:09:33.247504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.853 #33 NEW cov: 12660 ft: 15368 corp: 24/336b lim: 20 exec/s: 33 rss: 74Mb L: 20/20 MS: 1 ChangeBit- 00:09:52.853 #34 NEW cov: 12660 ft: 15436 corp: 25/355b lim: 20 exec/s: 34 rss: 75Mb L: 19/20 MS: 1 ChangeBinInt- 00:09:52.853 #35 NEW cov: 12660 ft: 15443 corp: 26/375b lim: 20 exec/s: 35 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:09:52.853 #36 NEW cov: 12660 ft: 15495 corp: 27/389b lim: 20 exec/s: 36 rss: 75Mb L: 14/20 MS: 1 ChangeBit- 00:09:53.113 #37 NEW cov: 12660 ft: 15502 corp: 28/402b lim: 20 exec/s: 37 rss: 75Mb L: 13/20 MS: 1 InsertByte- 00:09:53.113 #38 NEW cov: 12660 ft: 15517 corp: 29/417b lim: 20 exec/s: 38 rss: 75Mb L: 15/20 MS: 1 EraseBytes- 00:09:53.113 #39 NEW cov: 12660 ft: 15523 corp: 30/433b lim: 20 exec/s: 39 rss: 75Mb L: 16/20 MS: 1 InsertByte- 00:09:53.113 #40 NEW cov: 12660 ft: 15575 corp: 31/448b lim: 20 exec/s: 40 rss: 75Mb L: 15/20 MS: 1 ChangeBinInt- 00:09:53.113 #41 NEW cov: 12660 ft: 15606 corp: 32/468b lim: 20 exec/s: 41 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:09:53.372 #42 NEW cov: 12660 ft: 15612 corp: 33/478b lim: 20 exec/s: 42 rss: 76Mb L: 10/20 MS: 1 CrossOver- 00:09:53.372 #43 NEW cov: 12660 ft: 15619 corp: 34/498b lim: 20 exec/s: 21 rss: 76Mb L: 20/20 MS: 1 ChangeBinInt- 00:09:53.372 #43 DONE cov: 12660 ft: 15619 corp: 34/498b lim: 20 exec/s: 21 rss: 76Mb 00:09:53.372 ###### Recommended dictionary. ###### 00:09:53.372 "\001\000\000\000\000\000\000\000" # Uses: 2 00:09:53.372 ###### End of recommended dictionary. 
###### 00:09:53.372 Done 43 runs in 2 second(s) 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:53.372 11:09:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:09:53.372 [2024-10-15 11:09:33.984776] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:09:53.372 [2024-10-15 11:09:33.984845] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715225 ] 00:09:53.631 [2024-10-15 11:09:34.238569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.890 [2024-10-15 11:09:34.293501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.891 [2024-10-15 11:09:34.352658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.891 [2024-10-15 11:09:34.368802] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:09:53.891 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:53.891 INFO: Seed: 3975363502 00:09:53.891 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:09:53.891 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:09:53.891 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:53.891 INFO: A corpus is not provided, starting from an empty corpus 00:09:53.891 #2 INITED exec/s: 0 rss: 66Mb 00:09:53.891 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:53.891 This may also happen if the target rejected all inputs we tried so far 00:09:53.891 [2024-10-15 11:09:34.424469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.891 [2024-10-15 11:09:34.424498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.891 [2024-10-15 11:09:34.424569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.891 [2024-10-15 11:09:34.424584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.150 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:09:54.150 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:54.150 #21 NEW cov: 12224 ft: 12221 corp: 2/17b lim: 35 exec/s: 0 rss: 73Mb L: 16/16 MS: 4 CrossOver-InsertByte-CopyPart-InsertRepeatedBytes- 00:09:54.150 [2024-10-15 11:09:34.745327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a22400a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.150 [2024-10-15 11:09:34.745365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.150 [2024-10-15 11:09:34.745432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.150 [2024-10-15 11:09:34.745447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.150 #23 NEW cov: 12337 ft: 12851 corp: 3/32b lim: 35 exec/s: 0 rss: 73Mb L: 15/16 MS: 2 ChangeByte-CrossOver- 00:09:54.409 [2024-10-15 11:09:34.785390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.785419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.409 [2024-10-15 11:09:34.785474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.785488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.409 #24 NEW cov: 12343 ft: 13093 corp: 4/48b lim: 35 exec/s: 0 rss: 73Mb L: 
16/16 MS: 1 ChangeBit- 00:09:54.409 [2024-10-15 11:09:34.845517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1616b216 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.845545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.409 [2024-10-15 11:09:34.845618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.845632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.409 #27 NEW cov: 12428 ft: 13466 corp: 5/63b lim: 35 exec/s: 0 rss: 73Mb L: 15/16 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:09:54.409 [2024-10-15 11:09:34.885611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.885638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.409 [2024-10-15 11:09:34.885709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.885724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.409 #28 NEW cov: 12428 ft: 13568 corp: 6/80b lim: 35 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 InsertByte- 00:09:54.409 [2024-10-15 11:09:34.945765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.945791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.409 [2024-10-15 11:09:34.945846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.945860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.409 #29 NEW cov: 12428 ft: 13601 corp: 7/96b lim: 35 exec/s: 0 rss: 73Mb L: 16/17 MS: 1 ChangeASCIIInt- 00:09:54.409 [2024-10-15 11:09:34.985851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.985876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.409 [2024-10-15 11:09:34.985947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.409 [2024-10-15 11:09:34.985962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.409 #30 NEW cov: 12428 ft: 13674 corp: 8/112b lim: 35 exec/s: 0 rss: 73Mb L: 16/17 MS: 1 ChangeBinInt- 00:09:54.410 [2024-10-15 11:09:35.025959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.410 [2024-10-15 11:09:35.025984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.410 [2024-10-15 11:09:35.026044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.410 [2024-10-15 11:09:35.026059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.669 #31 NEW cov: 12428 ft: 13749 corp: 9/128b lim: 35 exec/s: 0 rss: 74Mb L: 16/17 MS: 1 ShuffleBytes- 00:09:54.669 [2024-10-15 11:09:35.086141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.086166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.086238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.086252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.669 #32 NEW cov: 12428 ft: 13767 corp: 10/144b lim: 35 exec/s: 0 rss: 74Mb L: 16/17 MS: 1 ChangeBinInt- 00:09:54.669 [2024-10-15 11:09:35.126407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.126435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.126491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.126505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.126558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.126572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.669 #35 NEW cov: 12428 ft: 14045 corp: 11/171b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 3 ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:09:54.669 [2024-10-15 11:09:35.166378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.166405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.166461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.166476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.669 #36 NEW cov: 12428 ft: 14080 corp: 12/188b lim: 35 exec/s: 0 rss: 74Mb L: 17/27 MS: 1 CopyPart- 00:09:54.669 [2024-10-15 11:09:35.226513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:3a220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.226540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.226594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.226611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.669 #37 NEW cov: 12428 ft: 14092 corp: 13/204b lim: 35 exec/s: 0 rss: 74Mb L: 16/27 MS: 1 ChangeByte- 00:09:54.669 [2024-10-15 11:09:35.287040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22b60a0a cdw11:b6b60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.287066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.287120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b6b6b6b6 cdw11:b6b60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.287133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.287204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2222b6b6 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.287218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.669 [2024-10-15 11:09:35.287273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:22232222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.669 [2024-10-15 11:09:35.287287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:54.929 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:54.929 #38 NEW cov: 12451 ft: 14464 corp: 14/234b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:09:54.929 [2024-10-15 11:09:35.327180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22b60a0a cdw11:b6b60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.327206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.327277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b6b6b6b6 cdw11:b6b60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.327292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.327347] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2222b6b6 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.327360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.327416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:22232222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.327430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:54.929 #39 NEW cov: 12451 ft: 14478 corp: 15/265b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 InsertByte- 00:09:54.929 [2024-10-15 11:09:35.387393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.387419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.387476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.387490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.387546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.387562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.387619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:22220a22 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.387633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:54.929 #40 NEW cov: 12451 ft: 14491 corp: 16/299b lim: 35 exec/s: 40 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:09:54.929 [2024-10-15 11:09:35.447495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.447521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.447594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.447609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.447664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00002237 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.447678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.447731] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.447744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:54.929 #41 NEW cov: 12451 ft: 14521 corp: 17/333b lim: 35 exec/s: 41 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:09:54.929 [2024-10-15 11:09:35.507323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1616b216 cdw11:16f10003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.507348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.929 [2024-10-15 11:09:35.507421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1616e9e9 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.929 [2024-10-15 11:09:35.507435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.929 #42 NEW cov: 12451 ft: 14547 corp: 18/348b lim: 35 exec/s: 42 rss: 74Mb L: 15/34 MS: 1 ChangeBinInt- 00:09:55.189 [2024-10-15 11:09:35.567805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.567830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.567903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.567918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.567974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.567987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.568047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:22231000 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.568065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.189 #43 NEW cov: 12451 ft: 14562 corp: 19/377b lim: 35 exec/s: 43 rss: 74Mb L: 29/34 MS: 1 InsertRepeatedBytes- 00:09:55.189 [2024-10-15 11:09:35.608066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.608092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.608164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.608179] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.608233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.608247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.608304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:24220a22 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.608318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.608372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:22222222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.608386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:55.189 #44 NEW cov: 12451 ft: 14633 corp: 20/412b lim: 35 exec/s: 44 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:09:55.189 [2024-10-15 11:09:35.667750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.667776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.667848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.667863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 #45 NEW cov: 12451 ft: 14648 corp: 21/429b lim: 35 exec/s: 45 rss: 74Mb L: 17/35 MS: 1 CrossOver- 00:09:55.189 [2024-10-15 11:09:35.708176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.708201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.708273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c7ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.708287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.708342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.708355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.708409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.708426] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.189 #46 NEW cov: 12451 ft: 14675 corp: 22/459b lim: 35 exec/s: 46 rss: 74Mb L: 30/35 MS: 1 InsertByte- 00:09:55.189 [2024-10-15 11:09:35.768066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.768092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.768164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:10222200 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.768178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 #47 NEW cov: 12451 ft: 14731 corp: 23/475b lim: 35 exec/s: 47 rss: 74Mb L: 16/35 MS: 1 ChangeBinInt- 00:09:55.189 [2024-10-15 11:09:35.808461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22b60a0a cdw11:b6b60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.808486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.808558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b6b6b6b6 cdw11:b6b60001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.808573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.808628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2222b6b6 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.808642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.189 [2024-10-15 11:09:35.808698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:22232222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-15 11:09:35.808712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.450 #48 NEW cov: 12451 ft: 14744 corp: 24/505b lim: 35 exec/s: 48 rss: 74Mb L: 30/35 MS: 1 ShuffleBytes- 00:09:55.450 [2024-10-15 11:09:35.848657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.848684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.848741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c7ffefff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.848755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.848810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 
cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.848824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.848879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.848893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.450 #49 NEW cov: 12451 ft: 14794 corp: 25/535b lim: 35 exec/s: 49 rss: 74Mb L: 30/35 MS: 1 ChangeBit- 00:09:55.450 [2024-10-15 11:09:35.908807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.908833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.908906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c7ffefff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.908921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.908974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.908989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.909050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.909064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.450 #50 NEW cov: 12451 ft: 14803 corp: 26/565b lim: 35 exec/s: 50 rss: 75Mb L: 30/35 MS: 1 ChangeASCIIInt- 00:09:55.450 [2024-10-15 11:09:35.968925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:d9220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.968951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.969008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c7ffefff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.969022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.969098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.969113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:35.969167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 
nsid:0 cdw10:00222210 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:35.969181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.450 #51 NEW cov: 12451 ft: 14877 corp: 27/595b lim: 35 exec/s: 51 rss: 75Mb L: 30/35 MS: 1 ChangeBinInt- 00:09:55.450 [2024-10-15 11:09:36.029296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:36.029322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:36.029379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:36.029393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:36.029447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:36.029460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:36.029515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:24220a22 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:36.029531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.450 [2024-10-15 11:09:36.029585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:22222322 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.450 [2024-10-15 11:09:36.029598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:55.450 #52 NEW cov: 12451 ft: 14884 corp: 28/630b lim: 35 exec/s: 52 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:09:55.710 [2024-10-15 11:09:36.089121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff03ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.710 [2024-10-15 11:09:36.089147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.710 [2024-10-15 11:09:36.089219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.710 [2024-10-15 11:09:36.089234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.710 [2024-10-15 11:09:36.089290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff1bffff cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.710 [2024-10-15 11:09:36.089304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.710 #53 NEW cov: 12451 ft: 14917 corp: 29/657b lim: 35 exec/s: 53 rss: 75Mb L: 27/35 MS: 1 ChangeBinInt- 00:09:55.710 
[2024-10-15 11:09:36.149117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.710 [2024-10-15 11:09:36.149143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:55.710 [2024-10-15 11:09:36.149214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00222210 cdw11:23220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.710 [2024-10-15 11:09:36.149228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:55.710 #54 NEW cov: 12451 ft: 14923 corp: 30/674b lim: 35 exec/s: 54 rss: 75Mb L: 17/35 MS: 1 InsertByte-
00:09:55.710 [2024-10-15 11:09:36.189225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22140000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.710 [2024-10-15 11:09:36.189250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:55.710 [2024-10-15 11:09:36.189321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:10002222 cdw11:22230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.710 [2024-10-15 11:09:36.189335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:55.710 #55 NEW cov: 12451 ft: 14982 corp: 31/691b lim: 35 exec/s: 55 rss: 75Mb L: 17/35 MS: 1 InsertByte-
00:09:55.710 [2024-10-15 11:09:36.229879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.710 [2024-10-15 11:09:36.229904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:55.710 [2024-10-15 11:09:36.229973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.710 [2024-10-15 11:09:36.229987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:55.711 [2024-10-15 11:09:36.230047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffeffff cdw11:ff400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.230061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:55.711 [2024-10-15 11:09:36.230125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:24220a22 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.230138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:09:55.711 [2024-10-15 11:09:36.230189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:22222322 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.230203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:09:55.711 #56 NEW cov: 12451 ft: 14997 corp: 32/726b lim: 35 exec/s: 56 rss: 75Mb L: 35/35 MS: 1 ChangeBit-
00:09:55.711 [2024-10-15 11:09:36.289364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1616b216 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.289389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:55.711 #57 NEW cov: 12451 ft: 15688 corp: 33/736b lim: 35 exec/s: 57 rss: 75Mb L: 10/35 MS: 1 EraseBytes-
00:09:55.711 [2024-10-15 11:09:36.329948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:d9220003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.329974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:55.711 [2024-10-15 11:09:36.330032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2222ef23 cdw11:22ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.330062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:55.711 [2024-10-15 11:09:36.330117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.330130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:55.711 [2024-10-15 11:09:36.330183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff22ffff cdw11:22100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.711 [2024-10-15 11:09:36.330197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:09:55.970 #58 NEW cov: 12451 ft: 15704 corp: 34/770b lim: 35 exec/s: 58 rss: 75Mb L: 34/35 MS: 1 CopyPart-
00:09:55.970 [2024-10-15 11:09:36.389846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:22100a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.970 [2024-10-15 11:09:36.389871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:55.970 [2024-10-15 11:09:36.389941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:23220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:55.970 [2024-10-15 11:09:36.389955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:55.970 #59 NEW cov: 12451 ft: 15731 corp: 35/786b lim: 35 exec/s: 29 rss: 75Mb L: 16/35 MS: 1 ChangeBinInt-
00:09:55.970 #59 DONE cov: 12451 ft: 15731 corp: 35/786b lim: 35 exec/s: 29 rss: 75Mb
00:09:55.970 Done 59 runs in 2 second(s)
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:55.970 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405'
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:55.971 11:09:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5
[2024-10-15 11:09:36.568141] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization...
[2024-10-15 11:09:36.568210] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715547 ]
[2024-10-15 11:09:36.825274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:56.489 [2024-10-15 11:09:36.880234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:56.489 [2024-10-15 11:09:36.939436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:56.489 [2024-10-15 11:09:36.955593] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 ***
00:09:56.489 INFO: Running with entropic power schedule (0xFF, 100).
00:09:56.489 INFO: Seed: 2266378606
00:09:56.489 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0),
00:09:56.489 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520),
00:09:56.489 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:09:56.489 INFO: A corpus is not provided, starting from an empty corpus
00:09:56.489 #2 INITED exec/s: 0 rss: 65Mb
00:09:56.489 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:56.489 This may also happen if the target rejected all inputs we tried so far
00:09:56.489 [2024-10-15 11:09:37.001215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:56.489 [2024-10-15 11:09:37.001245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:56.489 [2024-10-15 11:09:37.001297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:56.489 [2024-10-15 11:09:37.001315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:56.748 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142
00:09:56.748 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:56.748 #4 NEW cov: 12230 ft: 12229 corp: 2/26b lim: 45 exec/s: 0 rss: 73Mb L: 25/25 MS: 2 ChangeByte-InsertRepeatedBytes-
00:09:56.748 [2024-10-15 11:09:37.342247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:56.748 [2024-10-15 11:09:37.342286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:56.748 [2024-10-15 11:09:37.342340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ae2b5c4c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:56.748 [2024-10-15 11:09:37.342354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:56.748 [2024-10-15 11:09:37.342404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:56.748 [2024-10-15 11:09:37.342418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:57.008 #5 NEW cov: 12348 ft: 13100 corp: 3/59b lim: 45 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CMP- DE: "\341\277!\\L\256+\000"-
00:09:57.008 [2024-10-15 11:09:37.402502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a0a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.402529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.402584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.402598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.402650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.402663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.402715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.402729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:09:57.008 #7 NEW cov: 12354 ft: 13658 corp: 4/100b lim: 45 exec/s: 0 rss: 74Mb L: 41/41 MS: 2 CopyPart-InsertRepeatedBytes-
00:09:57.008 [2024-10-15 11:09:37.442577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.442602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.442672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.442686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.442738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.442755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.442808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.442821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:09:57.008 #8 NEW cov: 12439 ft: 13838 corp: 5/143b lim: 45 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 InsertRepeatedBytes-
00:09:57.008 [2024-10-15 11:09:37.502585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.502609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.502663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:09:57.008 [2024-10-15 11:09:37.502677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:57.008 [2024-10-15 11:09:37.502728] nvme_qpair.c:
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.502741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.008 #9 NEW cov: 12439 ft: 13900 corp: 6/172b lim: 45 exec/s: 0 rss: 74Mb L: 29/43 MS: 1 EraseBytes- 00:09:57.008 [2024-10-15 11:09:37.562912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a0a cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.562936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.008 [2024-10-15 11:09:37.563007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.563021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.008 [2024-10-15 11:09:37.563079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.563103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.008 [2024-10-15 11:09:37.563153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.563166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.008 #10 NEW cov: 12439 ft: 14006 corp: 7/213b lim: 45 exec/s: 0 rss: 74Mb L: 41/43 MS: 1 ChangeByte- 00:09:57.008 [2024-10-15 11:09:37.622759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9090250a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.622785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.008 [2024-10-15 11:09:37.622856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.008 [2024-10-15 11:09:37.622870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.268 #12 NEW cov: 12439 ft: 14135 corp: 8/232b lim: 45 exec/s: 0 rss: 74Mb L: 19/43 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:57.268 [2024-10-15 11:09:37.663182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a0a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.663207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.663277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.663292] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.663344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.663357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.663411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.663424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.268 #13 NEW cov: 12439 ft: 14173 corp: 9/274b lim: 45 exec/s: 0 rss: 74Mb L: 42/43 MS: 1 InsertByte- 00:09:57.268 [2024-10-15 11:09:37.703279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.703303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.703359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.703372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.703425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.703438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.703490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.703503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.268 #16 NEW cov: 12439 ft: 14204 corp: 10/317b lim: 45 exec/s: 0 rss: 74Mb L: 43/43 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:09:57.268 [2024-10-15 11:09:37.743392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.743416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.743487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.743501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.743554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.743568] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.743620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.743636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.268 #17 NEW cov: 12439 ft: 14269 corp: 11/360b lim: 45 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 ShuffleBytes- 00:09:57.268 [2024-10-15 11:09:37.783021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9090250a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.783052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.268 #18 NEW cov: 12439 ft: 14979 corp: 12/374b lim: 45 exec/s: 0 rss: 74Mb L: 14/43 MS: 1 EraseBytes- 00:09:57.268 [2024-10-15 11:09:37.843500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.843526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.843596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ae2b5c4c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.843611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.843665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c4c0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.843679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.268 #19 NEW cov: 12439 ft: 15068 corp: 13/407b lim: 45 exec/s: 0 rss: 74Mb L: 33/43 MS: 1 CopyPart- 00:09:57.268 [2024-10-15 11:09:37.883751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.883776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.883847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.883861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.883916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.883929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.268 [2024-10-15 11:09:37.883983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.268 [2024-10-15 11:09:37.883996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.531 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:57.531 #20 NEW cov: 12462 ft: 15127 corp: 14/450b lim: 45 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 ShuffleBytes- 00:09:57.531 [2024-10-15 11:09:37.923872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a0a cdw11:ff080007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.923897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:37.923968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.923986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:37.924050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.924064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:37.924116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.924130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.531 #21 NEW cov: 12462 ft: 15161 corp: 15/491b lim: 45 exec/s: 0 rss: 74Mb L: 41/43 MS: 1 ChangeBinInt- 00:09:57.531 [2024-10-15 11:09:37.984075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.984101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:37.984155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00e1f9f9 cdw11:bf210002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.984169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:37.984220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00002b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.984234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:37.984283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:37.984296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.531 #22 NEW 
cov: 12462 ft: 15192 corp: 16/530b lim: 45 exec/s: 22 rss: 74Mb L: 39/43 MS: 1 CrossOver- 00:09:57.531 [2024-10-15 11:09:38.024164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.024192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.024248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.024262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.024316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.024330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.024384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.024397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.531 #23 NEW cov: 12462 ft: 15216 corp: 17/573b lim: 45 exec/s: 23 rss: 74Mb L: 43/43 MS: 1 ShuffleBytes- 00:09:57.531 [2024-10-15 11:09:38.064317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.064346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.064417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.064431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.064486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.064499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.064553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.064567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.531 #24 NEW cov: 12462 ft: 15236 corp: 18/616b lim: 45 exec/s: 24 rss: 74Mb L: 43/43 MS: 1 ChangeBit- 00:09:57.531 [2024-10-15 11:09:38.124152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:d090250a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.124178] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.531 [2024-10-15 11:09:38.124230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.531 [2024-10-15 11:09:38.124243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.531 #25 NEW cov: 12462 ft: 15304 corp: 19/635b lim: 45 exec/s: 25 rss: 74Mb L: 19/43 MS: 1 ChangeBit- 00:09:57.791 [2024-10-15 11:09:38.164597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00f90000 cdw11:00f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.164622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.791 [2024-10-15 11:09:38.164677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.164690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.791 [2024-10-15 11:09:38.164741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.164755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.791 [2024-10-15 11:09:38.164805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.164819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.791 #26 NEW cov: 12462 ft: 15324 corp: 20/678b lim: 45 exec/s: 26 rss: 74Mb L: 43/43 MS: 1 ShuffleBytes- 00:09:57.791 [2024-10-15 11:09:38.224417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9090250a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.224444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.791 [2024-10-15 11:09:38.224498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90900090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.224515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.791 #27 NEW cov: 12462 ft: 15404 corp: 21/699b lim: 45 exec/s: 27 rss: 74Mb L: 21/43 MS: 1 CMP- DE: "\000\000"- 00:09:57.791 [2024-10-15 11:09:38.264551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.264576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.791 [2024-10-15 11:09:38.264628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.264641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.791 #28 NEW cov: 12462 ft: 15420 corp: 22/717b lim: 45 exec/s: 28 rss: 74Mb L: 18/43 MS: 1 EraseBytes- 00:09:57.791 [2024-10-15 11:09:38.304856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.791 [2024-10-15 11:09:38.304881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.791 [2024-10-15 11:09:38.304950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.304964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.792 [2024-10-15 11:09:38.305017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:48484848 cdw11:48480002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.305036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.792 #29 NEW cov: 12462 ft: 15457 corp: 23/752b lim: 45 exec/s: 29 rss: 74Mb L: 35/43 MS: 1 InsertRepeatedBytes- 00:09:57.792 [2024-10-15 11:09:38.345151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.345176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.792 [2024-10-15 11:09:38.345244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ae2b5c4c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.345258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.792 [2024-10-15 11:09:38.345310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.345323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.792 [2024-10-15 11:09:38.345373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.345386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.792 #30 NEW cov: 12462 ft: 15479 corp: 24/792b lim: 45 exec/s: 30 rss: 74Mb L: 40/43 MS: 1 CrossOver- 00:09:57.792 [2024-10-15 11:09:38.385056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.385081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.792 
[2024-10-15 11:09:38.385132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ae2b5c4c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.385148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.792 [2024-10-15 11:09:38.385197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5c4c0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:57.792 [2024-10-15 11:09:38.385210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.052 #31 NEW cov: 12462 ft: 15529 corp: 25/825b lim: 45 exec/s: 31 rss: 74Mb L: 33/43 MS: 1 PersAutoDict- DE: "\000\000"- 00:09:58.052 [2024-10-15 11:09:38.445346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.445371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.445423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.445436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.445505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.445519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.445571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.445584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.052 #32 NEW cov: 12462 ft: 15574 corp: 26/868b lim: 45 exec/s: 32 rss: 74Mb L: 43/43 MS: 1 ChangeByte- 00:09:58.052 [2024-10-15 11:09:38.485165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9090250a cdw11:90900006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.485190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.485258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90900090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.485272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.052 #33 NEW cov: 12462 ft: 15611 corp: 27/889b lim: 45 exec/s: 33 rss: 74Mb L: 21/43 MS: 1 ChangeBit- 00:09:58.052 [2024-10-15 11:09:38.545331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.545355] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.545423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.545437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.052 #34 NEW cov: 12462 ft: 15659 corp: 28/914b lim: 45 exec/s: 34 rss: 74Mb L: 25/43 MS: 1 ShuffleBytes- 00:09:58.052 [2024-10-15 11:09:38.585756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.585781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.585836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9e10005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.585850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.585902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2b004cae cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.585916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.052 [2024-10-15 11:09:38.585966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.052 [2024-10-15 11:09:38.585979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.052 #35 NEW cov: 12462 ft: 15663 corp: 29/955b lim: 45 exec/s: 35 rss: 74Mb L: 41/43 MS: 1 EraseBytes- 00:09:58.053 [2024-10-15 11:09:38.625562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00002513 cdw11:00900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.053 [2024-10-15 11:09:38.625587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.053 [2024-10-15 11:09:38.625658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.053 [2024-10-15 11:09:38.625672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.053 #36 NEW cov: 12462 ft: 15717 corp: 30/974b lim: 45 exec/s: 36 rss: 74Mb L: 19/43 MS: 1 ChangeBinInt- 00:09:58.053 [2024-10-15 11:09:38.666010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.053 [2024-10-15 11:09:38.666040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.053 [2024-10-15 11:09:38.666108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 
nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.053 [2024-10-15 11:09:38.666123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.053 [2024-10-15 11:09:38.666174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.053 [2024-10-15 11:09:38.666187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.053 [2024-10-15 11:09:38.666237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f9f900f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.053 [2024-10-15 11:09:38.666250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.312 #37 NEW cov: 12462 ft: 15756 corp: 31/1017b lim: 45 exec/s: 37 rss: 74Mb L: 43/43 MS: 1 CopyPart- 00:09:58.312 [2024-10-15 11:09:38.705829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.705854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.705924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.705938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.312 #38 NEW cov: 12462 ft: 15762 corp: 32/1035b lim: 45 exec/s: 38 rss: 75Mb L: 18/43 MS: 1 CopyPart- 00:09:58.312 [2024-10-15 11:09:38.766307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.766331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.766384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.766398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.766450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.766463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.766513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.766526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.312 #39 NEW cov: 12462 ft: 15796 corp: 33/1078b lim: 45 exec/s: 39 rss: 75Mb L: 43/43 MS: 1 ChangeBit- 00:09:58.312 
[2024-10-15 11:09:38.806406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.806431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.806498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.806512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.806563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.806576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.806627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f9f900f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.806640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.312 #40 NEW cov: 12462 ft: 15870 corp: 34/1122b lim: 45 exec/s: 40 rss: 75Mb L: 44/44 MS: 1 InsertByte- 00:09:58.312 [2024-10-15 11:09:38.866564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00f90000 cdw11:00f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.866588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.866655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.866670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.866720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4cae215c cdw11:2b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.866734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.866789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.866803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.312 #41 NEW cov: 12462 ft: 15912 corp: 35/1165b lim: 45 exec/s: 41 rss: 75Mb L: 43/44 MS: 1 ShuffleBytes- 00:09:58.312 [2024-10-15 11:09:38.926575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00f90000 cdw11:00f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.926599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:58.312 [2024-10-15 11:09:38.926669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.926683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.312 [2024-10-15 11:09:38.926734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.312 [2024-10-15 11:09:38.926748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.572 #42 NEW cov: 12462 ft: 15920 corp: 36/1193b lim: 45 exec/s: 42 rss: 75Mb L: 28/44 MS: 1 EraseBytes- 00:09:58.572 [2024-10-15 11:09:38.966514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:9090250a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.572 [2024-10-15 11:09:38.966538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.572 [2024-10-15 11:09:38.966607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:6f6fff6f cdw11:6f6f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.572 [2024-10-15 11:09:38.966621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.572 #43 NEW cov: 12462 ft: 15925 corp: 37/1214b lim: 45 exec/s: 21 rss: 75Mb L: 21/44 MS: 1 ChangeBinInt- 00:09:58.572 #43 DONE cov: 12462 ft: 15925 corp: 37/1214b lim: 45 exec/s: 21 rss: 75Mb 00:09:58.572 ###### Recommended dictionary. ###### 00:09:58.572 "\341\277!\\L\256+\000" # Uses: 0 00:09:58.572 "\000\000" # Uses: 1 00:09:58.572 ###### End of recommended dictionary. 
######
00:09:58.572 Done 43 runs in 2 second(s)
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406'
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:58.572 11:09:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6
[2024-10-15 11:09:39.136482] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization...
[2024-10-15 11:09:39.136555] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715869 ]
[2024-10-15 11:09:39.325740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.831 [2024-10-15 11:09:39.364779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.831 [2024-10-15 11:09:39.424048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:58.832 [2024-10-15 11:09:39.440198] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 ***
00:09:58.832 INFO: Running with entropic power schedule (0xFF, 100).
00:09:58.832 INFO: Seed: 457407139
00:09:59.090 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0),
00:09:59.090 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520),
00:09:59.091 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:09:59.091 INFO: A corpus is not provided, starting from an empty corpus
00:09:59.091 #2 INITED exec/s: 0 rss: 66Mb
00:09:59.091 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:59.091 This may also happen if the target rejected all inputs we tried so far
00:09:59.091 [2024-10-15 11:09:39.495736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000
00:09:59.091 [2024-10-15 11:09:39.495767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:59.349 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161
00:09:59.349 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:59.349 #3 NEW cov: 12152 ft: 12141 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte-
00:09:59.349 [2024-10-15 11:09:39.836719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:09:59.349 [2024-10-15 11:09:39.836756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:59.349 [2024-10-15 11:09:39.836811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000
00:09:59.349 [2024-10-15 11:09:39.836825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:59.349 #5 NEW cov: 12265 ft: 12932 corp: 3/8b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 2 ChangeBit-CMP- DE: "\000\000\000\001"-
00:09:59.349 [2024-10-15 11:09:39.876567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ad7 cdw11:00000000
00:09:59.349 [2024-10-15 11:09:39.876596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:59.349 #6 NEW cov: 12271 ft: 13228 corp: 4/10b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt-
00:09:59.349 [2024-10-15 11:09:39.936745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000
00:09:59.349 [2024-10-15 11:09:39.936771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:59.349 #7 NEW cov: 12356 ft: 13644 corp: 5/12b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 CopyPart-
00:09:59.349 [2024-10-15 11:09:39.977116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000
00:09:59.349 [2024-10-15 11:09:39.977142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:59.349 [2024-10-15 11:09:39.977198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0
cid:5 nsid:0 cdw10:00003232 cdw11:00000000 00:09:59.349 [2024-10-15 11:09:39.977213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.349 [2024-10-15 11:09:39.977268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003232 cdw11:00000000 00:09:59.349 [2024-10-15 11:09:39.977282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.608 #8 NEW cov: 12356 ft: 13918 corp: 6/19b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:09:59.609 [2024-10-15 11:09:40.017157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.017185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.609 [2024-10-15 11:09:40.017243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.017257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.609 #9 NEW cov: 12356 ft: 14097 corp: 7/24b lim: 10 exec/s: 0 rss: 74Mb L: 5/7 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:09:59.609 [2024-10-15 11:09:40.077492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.077527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.609 [2024-10-15 11:09:40.077599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003232 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.077614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.609 [2024-10-15 11:09:40.077671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003232 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.077685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.609 #10 NEW cov: 12356 ft: 14156 corp: 8/31b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeASCIIInt- 00:09:59.609 [2024-10-15 11:09:40.137493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.137522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.609 [2024-10-15 11:09:40.137581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.137596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.609 #11 NEW cov: 12356 ft: 14197 corp: 9/36b lim: 10 exec/s: 0 rss: 74Mb L: 5/7 MS: 1 ShuffleBytes- 00:09:59.609 [2024-10-15 11:09:40.177485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b2b cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.177516] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.609 #15 NEW cov: 12356 ft: 14216 corp: 10/38b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 4 ChangeBit-ChangeByte-CrossOver-CopyPart- 00:09:59.609 [2024-10-15 11:09:40.217830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b00 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.217856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.609 [2024-10-15 11:09:40.217911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.217925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.609 [2024-10-15 11:09:40.217977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000014a cdw11:00000000 00:09:59.609 [2024-10-15 11:09:40.217991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.868 #16 NEW cov: 12356 ft: 14263 corp: 11/44b lim: 10 exec/s: 0 rss: 74Mb L: 6/7 MS: 1 InsertByte- 00:09:59.868 [2024-10-15 11:09:40.257679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.257705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.868 #17 NEW cov: 12356 ft: 14327 corp: 12/47b lim: 10 exec/s: 0 rss: 74Mb L: 3/7 MS: 1 EraseBytes- 00:09:59.868 [2024-10-15 11:09:40.318133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.318162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.868 [2024-10-15 11:09:40.318220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003232 cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.318237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.868 [2024-10-15 11:09:40.318294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000323a cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.318311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.868 #18 NEW cov: 12356 ft: 14343 corp: 13/54b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeBit- 00:09:59.868 [2024-10-15 11:09:40.357959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.357985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.868 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:59.868 #19 NEW cov: 12379 ft: 14431 corp: 14/56b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 CMP- DE: "\000\000"- 00:09:59.868 [2024-10-15 11:09:40.418143] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.418171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.868 #20 NEW cov: 12379 ft: 14484 corp: 15/58b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 EraseBytes- 00:09:59.868 [2024-10-15 11:09:40.478282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ad7 cdw11:00000000 00:09:59.868 [2024-10-15 11:09:40.478310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.126 #21 NEW cov: 12379 ft: 14532 corp: 16/60b lim: 10 exec/s: 21 rss: 74Mb L: 2/7 MS: 1 ShuffleBytes- 00:10:00.126 [2024-10-15 11:09:40.518433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.126 [2024-10-15 11:09:40.518461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.126 #22 NEW cov: 12379 ft: 14554 corp: 17/62b lim: 10 exec/s: 22 rss: 74Mb L: 2/7 MS: 1 CopyPart- 00:10:00.126 [2024-10-15 11:09:40.579009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.126 [2024-10-15 11:09:40.579043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.126 [2024-10-15 11:09:40.579100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.126 [2024-10-15 11:09:40.579115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.126 [2024-10-15 11:09:40.579170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:10:00.126 [2024-10-15 11:09:40.579184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.126 [2024-10-15 11:09:40.579237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004a01 cdw11:00000000 00:10:00.126 [2024-10-15 11:09:40.579251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:00.126 #23 NEW cov: 12379 ft: 14784 corp: 18/71b lim: 10 exec/s: 23 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:10:00.126 [2024-10-15 11:09:40.638773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 00:10:00.126 [2024-10-15 11:09:40.638799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.127 #24 NEW cov: 12379 ft: 14800 corp: 19/74b lim: 10 exec/s: 24 rss: 74Mb L: 3/9 MS: 1 InsertByte- 00:10:00.127 [2024-10-15 11:09:40.699003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.127 [2024-10-15 11:09:40.699035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.127 [2024-10-15 11:09:40.699108] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.127 [2024-10-15 11:09:40.699139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.127 #25 NEW cov: 12379 ft: 14826 corp: 20/79b lim: 10 exec/s: 25 rss: 74Mb L: 5/9 MS: 1 ShuffleBytes- 00:10:00.127 [2024-10-15 11:09:40.739023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000add cdw11:00000000 00:10:00.127 [2024-10-15 11:09:40.739070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.385 #26 NEW cov: 12379 ft: 14831 corp: 21/81b lim: 10 exec/s: 26 rss: 74Mb L: 2/9 MS: 1 CopyPart- 00:10:00.385 [2024-10-15 11:09:40.799181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 00:10:00.385 [2024-10-15 11:09:40.799206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.385 #27 NEW cov: 12379 ft: 14880 corp: 22/83b lim: 10 exec/s: 27 rss: 74Mb L: 2/9 MS: 1 ChangeBit- 00:10:00.385 [2024-10-15 11:09:40.839307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 00:10:00.385 [2024-10-15 11:09:40.839333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.385 #28 NEW cov: 12379 ft: 14887 corp: 23/86b lim: 10 exec/s: 28 rss: 74Mb L: 3/9 MS: 1 ChangeBinInt- 00:10:00.385 [2024-10-15 11:09:40.899584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 00:10:00.385 [2024-10-15 11:09:40.899612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.385 [2024-10-15 11:09:40.899668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.385 [2024-10-15 11:09:40.899683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.385 #29 NEW cov: 12379 ft: 14915 corp: 24/90b lim: 10 exec/s: 29 rss: 75Mb L: 4/9 MS: 1 PersAutoDict- DE: "\000\000"- 00:10:00.385 [2024-10-15 11:09:40.959604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.385 [2024-10-15 11:09:40.959630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.385 #30 NEW cov: 12379 ft: 14951 corp: 25/92b lim: 10 exec/s: 30 rss: 75Mb L: 2/9 MS: 1 CrossOver- 00:10:00.643 [2024-10-15 11:09:41.020090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b3a cdw11:00000000 00:10:00.643 [2024-10-15 11:09:41.020117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.643 [2024-10-15 11:09:41.020174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.643 [2024-10-15 11:09:41.020187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.644 [2024-10-15 11:09:41.020243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000014a cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.020257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.644 #31 NEW cov: 12379 ft: 14985 corp: 26/98b lim: 10 exec/s: 31 rss: 75Mb L: 6/9 MS: 1 ChangeByte- 00:10:00.644 [2024-10-15 11:09:41.079965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.079991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.644 #32 NEW cov: 12379 ft: 14997 corp: 27/100b lim: 10 exec/s: 32 rss: 75Mb L: 2/9 MS: 1 CopyPart- 00:10:00.644 [2024-10-15 11:09:41.120061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.120087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.644 #33 NEW cov: 12379 ft: 15030 corp: 28/103b lim: 10 exec/s: 33 rss: 75Mb L: 3/9 MS: 1 CopyPart- 00:10:00.644 [2024-10-15 11:09:41.160203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.160229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.644 #34 NEW cov: 12379 ft: 15039 corp: 29/106b lim: 10 exec/s: 34 rss: 75Mb L: 3/9 MS: 1 CopyPart- 00:10:00.644 [2024-10-15 11:09:41.220500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.220525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.644 [2024-10-15 11:09:41.220598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002800 cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.220613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.644 #35 NEW cov: 12379 ft: 15041 corp: 30/110b lim: 10 exec/s: 35 rss: 75Mb L: 4/9 MS: 1 InsertByte- 00:10:00.644 [2024-10-15 11:09:41.260739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002bff cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.260767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.644 [2024-10-15 11:09:41.260838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.260853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.644 [2024-10-15 11:09:41.260905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000014a cdw11:00000000 00:10:00.644 [2024-10-15 11:09:41.260919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.903 #36 NEW cov: 12379 ft: 15055 corp: 31/116b lim: 10 exec/s: 36 rss: 75Mb L: 6/9 MS: 1 ChangeBinInt- 00:10:00.903 [2024-10-15 11:09:41.300611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.903 [2024-10-15 11:09:41.300637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.903 #37 NEW cov: 12379 ft: 15068 corp: 32/118b lim: 10 exec/s: 37 rss: 75Mb L: 2/9 MS: 1 ShuffleBytes- 00:10:00.903 [2024-10-15 11:09:41.340729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:10:00.903 [2024-10-15 11:09:41.340754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.903 #38 NEW cov: 12379 ft: 15078 corp: 33/120b lim: 10 exec/s: 38 rss: 75Mb L: 2/9 MS: 1 ChangeByte- 00:10:00.903 [2024-10-15 11:09:41.400919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000005f9 cdw11:00000000 00:10:00.903 [2024-10-15 11:09:41.400945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.903 #39 NEW cov: 12379 ft: 15079 corp: 34/122b lim: 10 exec/s: 39 rss: 75Mb L: 2/9 MS: 1 EraseBytes- 00:10:00.903 [2024-10-15 11:09:41.461108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:00.903 [2024-10-15 11:09:41.461133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.903 #40 NEW cov: 12379 ft: 15106 corp: 35/125b lim: 10 exec/s: 20 rss: 75Mb L: 3/9 MS: 1 CopyPart- 00:10:00.903 #40 DONE cov: 12379 ft: 15106 corp: 35/125b lim: 10 exec/s: 20 rss: 75Mb 00:10:00.903 ###### Recommended dictionary. ###### 00:10:00.903 "\000\000\000\001" # Uses: 1 00:10:00.903 "\000\000" # Uses: 1 00:10:00.903 ###### End of recommended dictionary. 
###### 00:10:00.903 Done 40 runs in 2 second(s) 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:01.162 11:09:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:10:01.162 [2024-10-15 11:09:41.628638] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:01.162 [2024-10-15 11:09:41.628708] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716219 ] 00:10:01.421 [2024-10-15 11:09:41.809634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.421 [2024-10-15 11:09:41.850802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.421 [2024-10-15 11:09:41.910116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.421 [2024-10-15 11:09:41.926294] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:10:01.421 INFO: Running with entropic power schedule (0xFF, 100). 
00:10:01.421 INFO: Seed: 2943418339 00:10:01.421 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:01.421 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:01.421 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:10:01.421 INFO: A corpus is not provided, starting from an empty corpus 00:10:01.421 #2 INITED exec/s: 0 rss: 66Mb 00:10:01.421 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:01.421 This may also happen if the target rejected all inputs we tried so far 00:10:01.421 [2024-10-15 11:09:42.003542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a76 cdw11:00000000 00:10:01.421 [2024-10-15 11:09:42.003579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.989 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:10:01.989 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:01.989 #3 NEW cov: 12134 ft: 12129 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:10:01.989 [2024-10-15 11:09:42.355182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.355227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.355326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.355342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.355429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.355445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.355534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.355552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:01.989 #5 NEW cov: 12264 ft: 13109 corp: 3/11b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 2 EraseBytes-InsertRepeatedBytes- 00:10:01.989 [2024-10-15 11:09:42.424535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003076 cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.424563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.989 #6 NEW cov: 12270 ft: 13310 corp: 4/13b lim: 10 exec/s: 0 rss: 73Mb L: 2/8 MS: 1 ChangeByte- 00:10:01.989 [2024-10-15 11:09:42.474784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000305c cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.474811] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.989 #7 NEW cov: 12355 ft: 13594 corp: 5/15b lim: 10 exec/s: 0 rss: 74Mb L: 2/8 MS: 1 ChangeByte- 00:10:01.989 [2024-10-15 11:09:42.545998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.546024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.546118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a76 cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.546133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.546217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.546235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.546321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.546337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:01.989 #8 NEW cov: 12355 ft: 13649 corp: 6/23b lim: 10 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:10:01.989 [2024-10-15 11:09:42.616164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.616190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.616277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a76 cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.616293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.616381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.616398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:01.989 [2024-10-15 11:09:42.616482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:10:01.989 [2024-10-15 11:09:42.616497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.247 #9 NEW cov: 12355 ft: 13703 corp: 7/32b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertByte- 00:10:02.247 [2024-10-15 11:09:42.685771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a2a cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.685800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.247 #12 NEW cov: 12355 ft: 13743 corp: 8/34b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 3 ChangeByte-ChangeBit-CopyPart- 
00:10:02.247 [2024-10-15 11:09:42.736771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.736798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.247 [2024-10-15 11:09:42.736884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.736900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.247 [2024-10-15 11:09:42.736990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.737007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.247 [2024-10-15 11:09:42.737104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.737120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.247 #13 NEW cov: 12355 ft: 13814 corp: 9/42b lim: 10 exec/s: 0 rss: 74Mb L: 8/9 MS: 1 ChangeBinInt- 00:10:02.247 [2024-10-15 11:09:42.786190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000760a cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.786217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.247 #14 NEW cov: 12355 ft: 13888 corp: 10/44b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:10:02.247 [2024-10-15 11:09:42.836963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.247 [2024-10-15 11:09:42.836989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.247 [2024-10-15 11:09:42.837087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.248 [2024-10-15 11:09:42.837105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.248 [2024-10-15 11:09:42.837193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.248 [2024-10-15 11:09:42.837209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.248 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:02.248 #15 NEW cov: 12378 ft: 14082 corp: 11/50b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 EraseBytes- 00:10:02.507 [2024-10-15 11:09:42.887397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.887423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:42.887516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a00 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.887534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:42.887626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.887642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:42.887741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.887763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.507 #16 NEW cov: 12378 ft: 14107 corp: 12/58b lim: 10 exec/s: 0 rss: 74Mb L: 8/9 MS: 1 ChangeByte- 00:10:02.507 [2024-10-15 11:09:42.957673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.957699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:42.957790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff76 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.957806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:42.957898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.957915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:42.957998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:02.507 [2024-10-15 11:09:42.958013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.507 #17 NEW cov: 12378 ft: 14118 corp: 13/66b lim: 10 exec/s: 17 rss: 74Mb L: 8/9 MS: 1 ShuffleBytes- 00:10:02.507 [2024-10-15 11:09:43.007914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.007940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:43.008033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a00 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.008051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:43.008133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002d00 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.008149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:43.008239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.008255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.507 #18 NEW cov: 12378 ft: 14124 corp: 14/74b lim: 10 exec/s: 18 rss: 74Mb L: 8/9 MS: 1 ChangeByte- 00:10:02.507 [2024-10-15 11:09:43.078167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.078193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:43.078279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff75 cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.078296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:43.078387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.078403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.507 [2024-10-15 11:09:43.078490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:02.507 [2024-10-15 11:09:43.078509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.507 #19 NEW cov: 12378 ft: 14143 corp: 15/82b lim: 10 exec/s: 19 rss: 74Mb L: 8/9 MS: 1 ChangeBinInt- 00:10:02.766 [2024-10-15 11:09:43.148347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.148376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.148472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff76 cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.148491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.148578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.148596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.148690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.148707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.766 #20 NEW cov: 12378 ft: 14159 corp: 16/91b lim: 10 exec/s: 20 rss: 74Mb L: 9/9 MS: 1 InsertByte- 00:10:02.766 [2024-10-15 11:09:43.198890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.198919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:10:02.766 [2024-10-15 11:09:43.199002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a00 cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.199018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.199110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002d3a cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.199126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.199218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.199237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.199315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.199333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:02.766 #21 NEW cov: 12378 ft: 14246 corp: 17/101b lim: 10 exec/s: 21 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:10:02.766 [2024-10-15 11:09:43.269219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.269248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.269344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.269360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.269447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.269464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.269554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.269573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.766 [2024-10-15 11:09:43.269654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a2a cdw11:00000000 00:10:02.766 [2024-10-15 11:09:43.269672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:02.767 #22 NEW cov: 12378 ft: 14290 corp: 18/111b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:10:02.767 [2024-10-15 11:09:43.339560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.767 [2024-10-15 11:09:43.339588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:10:02.767 [2024-10-15 11:09:43.339678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a76 cdw11:00000000 00:10:02.767 [2024-10-15 11:09:43.339697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.767 [2024-10-15 11:09:43.339785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:02.767 [2024-10-15 11:09:43.339803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.767 [2024-10-15 11:09:43.339889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:10:02.767 [2024-10-15 11:09:43.339906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.767 [2024-10-15 11:09:43.339993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a41 cdw11:00000000 00:10:02.767 [2024-10-15 11:09:43.340012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:02.767 #23 NEW cov: 12378 ft: 14329 corp: 19/121b lim: 10 exec/s: 23 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:10:03.026 [2024-10-15 11:09:43.419592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.419628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.419722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a76 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.419740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.419827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.419846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.419936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.419953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.026 #24 NEW cov: 12378 ft: 14382 corp: 20/129b lim: 10 exec/s: 24 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:10:03.026 [2024-10-15 11:09:43.469620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.469649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.469731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.469749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:10:03.026 [2024-10-15 11:09:43.469832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.469848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.469936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.469952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.026 #25 NEW cov: 12378 ft: 14385 corp: 21/137b lim: 10 exec/s: 25 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:10:03.026 [2024-10-15 11:09:43.518994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000305c cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.519020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.026 #26 NEW cov: 12378 ft: 14391 corp: 22/139b lim: 10 exec/s: 26 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:10:03.026 [2024-10-15 11:09:43.590336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000690a cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.590363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.590457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.590475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.590562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000076ff cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.590579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.590666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.590683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.026 #27 NEW cov: 12378 ft: 14404 corp: 23/148b lim: 10 exec/s: 27 rss: 74Mb L: 9/10 MS: 1 InsertByte- 00:10:03.026 [2024-10-15 11:09:43.640491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fa00 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.640516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.640607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff75 cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.640625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.640710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.640729] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.026 [2024-10-15 11:09:43.640812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:03.026 [2024-10-15 11:09:43.640827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.285 #28 NEW cov: 12378 ft: 14417 corp: 24/156b lim: 10 exec/s: 28 rss: 74Mb L: 8/10 MS: 1 ChangeBinInt- 00:10:03.285 [2024-10-15 11:09:43.710988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.711014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.711108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.711125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.711208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000076 cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.711223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.711308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.711325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.711410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.711425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:03.285 #29 NEW cov: 12378 ft: 14421 corp: 25/166b lim: 10 exec/s: 29 rss: 75Mb L: 10/10 MS: 1 CrossOver- 00:10:03.285 [2024-10-15 11:09:43.760105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003000 cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.760132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.285 #30 NEW cov: 12378 ft: 14444 corp: 26/168b lim: 10 exec/s: 30 rss: 75Mb L: 2/10 MS: 1 ChangeByte- 00:10:03.285 [2024-10-15 11:09:43.811090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.811116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.811200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff75 cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.811217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.811298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.811314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.811398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f70a cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.811414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.285 #31 NEW cov: 12378 ft: 14452 corp: 27/176b lim: 10 exec/s: 31 rss: 75Mb L: 8/10 MS: 1 ChangeBinInt- 00:10:03.285 [2024-10-15 11:09:43.861327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.861353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.861436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.861452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.861547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000076ff cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.861567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.285 [2024-10-15 11:09:43.861648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:10:03.285 [2024-10-15 11:09:43.861664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.285 #32 NEW cov: 12378 ft: 14480 corp: 28/185b lim: 10 exec/s: 32 rss: 75Mb L: 9/10 MS: 1 CrossOver- 00:10:03.544 [2024-10-15 11:09:43.931715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.931742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.544 [2024-10-15 11:09:43.931839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003a00 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.931855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.544 [2024-10-15 11:09:43.931939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.931956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.544 [2024-10-15 11:09:43.932042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.932058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.544 #33 NEW cov: 12378 ft: 14491 corp: 29/194b lim: 10 exec/s: 33 rss: 75Mb L: 9/10 
MS: 1 InsertByte- 00:10:03.544 [2024-10-15 11:09:43.981896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fa00 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.981922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.544 [2024-10-15 11:09:43.982023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000140 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.982044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.544 [2024-10-15 11:09:43.982130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.982147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.544 [2024-10-15 11:09:43.982226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:10:03.544 [2024-10-15 11:09:43.982243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.544 #34 NEW cov: 12378 ft: 14519 corp: 30/202b lim: 10 exec/s: 17 rss: 75Mb L: 8/10 MS: 1 CMP- DE: "\001@\000\000"- 00:10:03.544 #34 DONE cov: 12378 ft: 14519 corp: 30/202b lim: 10 exec/s: 17 rss: 75Mb 00:10:03.544 ###### Recommended dictionary. ###### 00:10:03.544 "\001@\000\000" # Uses: 0 00:10:03.544 ###### End of recommended dictionary. ###### 00:10:03.544 Done 34 runs in 2 second(s) 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:03.544 11:09:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:10:03.544 [2024-10-15 11:09:44.164945] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:03.544 [2024-10-15 11:09:44.165014] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716572 ] 00:10:03.803 [2024-10-15 11:09:44.355073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.803 [2024-10-15 11:09:44.396397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.063 [2024-10-15 11:09:44.455702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.063 [2024-10-15 11:09:44.471833] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:10:04.063 INFO: Running with entropic power schedule (0xFF, 100). 00:10:04.063 INFO: Seed: 1193453450 00:10:04.063 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:04.063 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:04.063 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:10:04.063 INFO: A corpus is not provided, starting from an empty corpus 00:10:04.063 [2024-10-15 11:09:44.539236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.063 [2024-10-15 11:09:44.539274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.063 #2 INITED cov: 12177 ft: 12172 corp: 1/1b exec/s: 0 rss: 72Mb 00:10:04.063 [2024-10-15 11:09:44.589605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.063 [2024-10-15 11:09:44.589635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.322 NEW_FUNC[1/1]: 0x1019f58 in posix_sock_recv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1628 00:10:04.322 #3 NEW cov: 12293 ft: 12740 corp: 2/2b lim: 5 exec/s: 0 rss: 74Mb L: 1/1 MS: 1 ChangeBinInt- 00:10:04.322 [2024-10-15 11:09:44.941001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.322 [2024-10-15 11:09:44.941046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:10:04.322 [2024-10-15 11:09:44.941139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.322 [2024-10-15 11:09:44.941156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.581 #4 NEW cov: 12299 ft: 13755 corp: 3/4b lim: 5 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 InsertByte- 00:10:04.581 [2024-10-15 11:09:44.991303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.581 [2024-10-15 11:09:44.991331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.581 [2024-10-15 11:09:44.991427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.581 [2024-10-15 11:09:44.991444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.581 #5 NEW cov: 12384 ft: 14002 corp: 4/6b lim: 5 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 ChangeBinInt- 00:10:04.581 [2024-10-15 11:09:45.062395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.581 [2024-10-15 11:09:45.062426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.582 [2024-10-15 11:09:45.062542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.062558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.582 [2024-10-15 11:09:45.062650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.062668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:04.582 [2024-10-15 11:09:45.062764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.062782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:04.582 #6 NEW cov: 12384 ft: 14422 corp: 5/10b lim: 5 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:10:04.582 [2024-10-15 11:09:45.112048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.112074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.582 [2024-10-15 11:09:45.112160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:10:04.582 [2024-10-15 11:09:45.112177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.582 #7 NEW cov: 12384 ft: 14491 corp: 6/12b lim: 5 exec/s: 0 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:10:04.582 [2024-10-15 11:09:45.162610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.162636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.582 [2024-10-15 11:09:45.162727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.162747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.582 [2024-10-15 11:09:45.162838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.582 [2024-10-15 11:09:45.162855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:04.582 #8 NEW cov: 12384 ft: 14752 corp: 7/15b lim: 5 exec/s: 0 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:10:04.840 [2024-10-15 11:09:45.212034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.840 [2024-10-15 11:09:45.212063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.840 #9 NEW cov: 12384 ft: 14818 corp: 8/16b lim: 5 exec/s: 0 rss: 74Mb L: 1/4 MS: 1 ShuffleBytes- 00:10:04.841 [2024-10-15 11:09:45.282209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.841 [2024-10-15 11:09:45.282235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.841 #10 NEW cov: 12384 ft: 14831 corp: 9/17b lim: 5 exec/s: 0 rss: 74Mb L: 1/4 MS: 1 ChangeByte- 00:10:04.841 [2024-10-15 11:09:45.332441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.841 [2024-10-15 11:09:45.332468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.841 #11 NEW cov: 12384 ft: 14865 corp: 10/18b lim: 5 exec/s: 0 rss: 74Mb L: 1/4 MS: 1 CopyPart- 00:10:04.841 [2024-10-15 11:09:45.403006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.841 [2024-10-15 11:09:45.403035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.841 [2024-10-15 11:09:45.403127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:10:04.841 [2024-10-15 11:09:45.403142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.841 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:04.841 #12 NEW cov: 12407 ft: 14939 corp: 11/20b lim: 5 exec/s: 0 rss: 74Mb L: 2/4 MS: 1 ShuffleBytes- 00:10:04.841 [2024-10-15 11:09:45.452799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:04.841 [2024-10-15 11:09:45.452828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.099 #13 NEW cov: 12407 ft: 14956 corp: 12/21b lim: 5 exec/s: 0 rss: 74Mb L: 1/4 MS: 1 ChangeBit- 00:10:05.099 [2024-10-15 11:09:45.503002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.099 [2024-10-15 11:09:45.503032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.099 #14 NEW cov: 12407 ft: 14980 corp: 13/22b lim: 5 exec/s: 14 rss: 74Mb L: 1/4 MS: 1 ChangeBit- 00:10:05.099 [2024-10-15 11:09:45.553091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.099 [2024-10-15 11:09:45.553122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.099 #15 NEW cov: 12407 ft: 14987 corp: 14/23b lim: 5 exec/s: 15 rss: 74Mb L: 1/4 MS: 1 CrossOver- 00:10:05.099 [2024-10-15 11:09:45.603486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.099 [2024-10-15 11:09:45.603515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.099 #16 NEW cov: 12407 ft: 14999 corp: 15/24b lim: 5 exec/s: 16 rss: 74Mb L: 1/4 MS: 1 ChangeBinInt- 00:10:05.099 [2024-10-15 11:09:45.674693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.099 [2024-10-15 11:09:45.674720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.099 [2024-10-15 11:09:45.674803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.099 [2024-10-15 11:09:45.674820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.099 [2024-10-15 11:09:45.674900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.099 [2024-10-15 11:09:45.674916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.099 [2024-10-15 11:09:45.675010] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.100 [2024-10-15 11:09:45.675029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:05.100 #17 NEW cov: 12407 ft: 15137 corp: 16/28b lim: 5 exec/s: 17 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:10:05.358 [2024-10-15 11:09:45.744289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.358 [2024-10-15 11:09:45.744317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.359 [2024-10-15 11:09:45.744418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.359 [2024-10-15 11:09:45.744434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.359 #18 NEW cov: 12407 ft: 15148 corp: 17/30b lim: 5 exec/s: 18 rss: 74Mb L: 2/4 MS: 1 ChangeBit- 00:10:05.359 [2024-10-15 11:09:45.814552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.359 [2024-10-15 11:09:45.814579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.359 [2024-10-15 11:09:45.814665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.359 [2024-10-15 11:09:45.814683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.359 #19 NEW cov: 12407 ft: 15158 corp: 18/32b lim: 5 exec/s: 19 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:10:05.359 [2024-10-15 11:09:45.884816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.359 [2024-10-15 11:09:45.884843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.359 [2024-10-15 11:09:45.884946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.359 [2024-10-15 11:09:45.884965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.359 #20 NEW cov: 12407 ft: 15197 corp: 19/34b lim: 5 exec/s: 20 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:10:05.359 [2024-10-15 11:09:45.954728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.359 [2024-10-15 11:09:45.954757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.617 #21 NEW cov: 12407 ft: 15212 corp: 20/35b lim: 5 exec/s: 21 rss: 74Mb L: 1/4 MS: 1 ChangeByte- 00:10:05.617 [2024-10-15 11:09:46.025639] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.617 [2024-10-15 11:09:46.025667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.618 [2024-10-15 11:09:46.025765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.025782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.618 [2024-10-15 11:09:46.025872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.025890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.618 #22 NEW cov: 12407 ft: 15227 corp: 21/38b lim: 5 exec/s: 22 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:10:05.618 [2024-10-15 11:09:46.095944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.095971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.618 [2024-10-15 11:09:46.096063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.096079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.618 [2024-10-15 11:09:46.096173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.096189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.618 #23 NEW cov: 12407 ft: 15237 corp: 22/41b lim: 5 exec/s: 23 rss: 75Mb L: 3/4 MS: 1 CopyPart- 00:10:05.618 [2024-10-15 11:09:46.165424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.165450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.618 #24 NEW cov: 12407 ft: 15247 corp: 23/42b lim: 5 exec/s: 24 rss: 75Mb L: 1/4 MS: 1 CopyPart- 00:10:05.618 [2024-10-15 11:09:46.236417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.236444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.618 [2024-10-15 11:09:46.236539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 
11:09:46.236557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.618 [2024-10-15 11:09:46.236644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.618 [2024-10-15 11:09:46.236662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.876 #25 NEW cov: 12407 ft: 15259 corp: 24/45b lim: 5 exec/s: 25 rss: 75Mb L: 3/4 MS: 1 CrossOver- 00:10:05.876 [2024-10-15 11:09:46.286517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.286544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.876 [2024-10-15 11:09:46.286647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.286663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.876 [2024-10-15 11:09:46.286749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.286765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.876 #26 NEW cov: 12407 ft: 15307 corp: 25/48b lim: 5 exec/s: 26 rss: 75Mb L: 3/4 MS: 1 ChangeBinInt- 00:10:05.876 [2024-10-15 11:09:46.356089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.356116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.876 #27 NEW cov: 12407 ft: 15325 corp: 26/49b lim: 5 exec/s: 27 rss: 75Mb L: 1/4 MS: 1 ChangeBit- 00:10:05.876 [2024-10-15 11:09:46.406319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.406346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.876 #28 NEW cov: 12407 ft: 15354 corp: 27/50b lim: 5 exec/s: 28 rss: 75Mb L: 1/4 MS: 1 ChangeBit- 00:10:05.876 [2024-10-15 11:09:46.456772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.456798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.876 [2024-10-15 11:09:46.456883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:05.876 [2024-10-15 11:09:46.456899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.876 #29 NEW cov: 12407 ft: 15359 corp: 28/52b lim: 5 exec/s: 29 rss: 75Mb L: 2/4 MS: 1 ChangeByte- 00:10:06.136 [2024-10-15 11:09:46.527032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.136 [2024-10-15 11:09:46.527058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.136 [2024-10-15 11:09:46.527151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.136 [2024-10-15 11:09:46.527171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.136 #30 NEW cov: 12407 ft: 15385 corp: 29/54b lim: 5 exec/s: 15 rss: 75Mb L: 2/4 MS: 1 ChangeBit- 00:10:06.136 #30 DONE cov: 12407 ft: 15385 corp: 29/54b lim: 5 exec/s: 15 rss: 75Mb 00:10:06.136 Done 30 runs in 2 second(s) 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:06.136 11:09:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 
traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:10:06.136 [2024-10-15 11:09:46.690677] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:06.136 [2024-10-15 11:09:46.690756] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716938 ] 00:10:06.395 [2024-10-15 11:09:46.866557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.395 [2024-10-15 11:09:46.904603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.395 [2024-10-15 11:09:46.963569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.395 [2024-10-15 11:09:46.979734] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:10:06.395 INFO: Running with entropic power schedule (0xFF, 100). 00:10:06.395 INFO: Seed: 3701440329 00:10:06.395 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:06.395 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:06.395 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:10:06.395 INFO: A corpus is not provided, starting from an empty corpus 00:10:06.395 [2024-10-15 11:09:47.024695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.395 [2024-10-15 11:09:47.024737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.654 #2 INITED cov: 12168 ft: 12173 corp: 1/1b exec/s: 0 rss: 72Mb 00:10:06.654 [2024-10-15 11:09:47.074665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.654 [2024-10-15 11:09:47.074697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.654 #3 NEW cov: 12293 ft: 12781 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBit- 00:10:06.654 [2024-10-15 11:09:47.164905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.654 [2024-10-15 11:09:47.164938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.654 #4 NEW cov: 12299 ft: 12933 corp: 3/3b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBinInt- 00:10:06.654 [2024-10-15 11:09:47.214961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.654 [2024-10-15 11:09:47.214992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.913 #5 NEW cov: 12384 ft: 13254 corp: 4/4b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBit- 00:10:06.913 [2024-10-15 11:09:47.305241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.305273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.913 #6 NEW cov: 12384 ft: 13439 corp: 5/5b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 CopyPart- 00:10:06.913 [2024-10-15 11:09:47.365640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.365671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.913 [2024-10-15 11:09:47.365720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.365737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.913 [2024-10-15 11:09:47.365766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.365782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:06.913 [2024-10-15 11:09:47.365812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.365828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:06.913 [2024-10-15 11:09:47.365857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.365873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:06.913 #7 NEW cov: 12384 ft: 14322 corp: 6/10b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:10:06.913 [2024-10-15 11:09:47.425604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.425639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.913 [2024-10-15 11:09:47.425688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.425705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.913 #8 NEW cov: 12384 ft: 14619 corp: 7/12b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:10:06.913 [2024-10-15 11:09:47.515811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.515841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:10:06.913 [2024-10-15 11:09:47.515890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.913 [2024-10-15 11:09:47.515906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.172 #9 NEW cov: 12384 ft: 14697 corp: 8/14b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:10:07.172 [2024-10-15 11:09:47.606018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.172 [2024-10-15 11:09:47.606058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.172 #10 NEW cov: 12384 ft: 14720 corp: 9/15b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:10:07.172 [2024-10-15 11:09:47.666185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.172 [2024-10-15 11:09:47.666217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.172 #11 NEW cov: 12384 ft: 14810 corp: 10/16b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:10:07.172 [2024-10-15 11:09:47.756416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.172 [2024-10-15 11:09:47.756448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.172 #12 NEW cov: 12384 ft: 14860 corp: 11/17b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:10:07.431 [2024-10-15 11:09:47.816577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.431 [2024-10-15 11:09:47.816608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.431 #13 NEW cov: 12384 ft: 14924 corp: 12/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:10:07.431 [2024-10-15 11:09:47.866788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.431 [2024-10-15 11:09:47.866821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.431 [2024-10-15 11:09:47.866855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.431 [2024-10-15 11:09:47.866871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.690 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:07.690 #14 NEW cov: 12407 ft: 14958 corp: 13/20b lim: 5 exec/s: 14 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:10:07.690 [2024-10-15 11:09:48.187655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.690 [2024-10-15 11:09:48.187698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.690 #15 NEW cov: 12407 ft: 15031 corp: 14/21b lim: 5 exec/s: 15 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:10:07.690 [2024-10-15 11:09:48.278024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.690 [2024-10-15 11:09:48.278064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.690 [2024-10-15 11:09:48.278113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.690 [2024-10-15 11:09:48.278130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.690 [2024-10-15 11:09:48.278159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.690 [2024-10-15 11:09:48.278175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.690 [2024-10-15 11:09:48.278204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.690 [2024-10-15 11:09:48.278220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:07.690 [2024-10-15 11:09:48.278249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.690 [2024-10-15 11:09:48.278265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:07.949 #16 NEW cov: 12407 ft: 15072 corp: 15/26b lim: 5 exec/s: 16 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:10:07.949 [2024-10-15 11:09:48.337952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.337984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.949 [2024-10-15 11:09:48.338039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.338056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.949 #17 NEW cov: 12407 ft: 15126 corp: 16/28b lim: 5 exec/s: 17 rss: 74Mb L: 2/5 MS: 1 ChangeASCIIInt- 00:10:07.949 [2024-10-15 11:09:48.428161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.428194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.949 #18 NEW cov: 12407 ft: 15140 corp: 17/29b lim: 5 exec/s: 18 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:10:07.949 [2024-10-15 11:09:48.478507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.478538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.949 [2024-10-15 11:09:48.478587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.478607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.949 [2024-10-15 11:09:48.478637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.478653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.949 [2024-10-15 11:09:48.478682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.478698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:07.949 [2024-10-15 11:09:48.478727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.478743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:07.949 #19 NEW cov: 12407 ft: 15215 corp: 18/34b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:10:07.949 [2024-10-15 11:09:48.568516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.949 [2024-10-15 11:09:48.568547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.208 [2024-10-15 11:09:48.618624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.618655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.208 #21 NEW cov: 12407 ft: 15230 corp: 19/35b lim: 5 exec/s: 21 rss: 74Mb L: 1/5 MS: 2 ChangeBinInt-ChangeBit- 00:10:08.208 [2024-10-15 11:09:48.668869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.668901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.208 [2024-10-15 11:09:48.668950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 
cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.668967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.208 [2024-10-15 11:09:48.668996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.669012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.208 #22 NEW cov: 12407 ft: 15383 corp: 20/38b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:10:08.208 [2024-10-15 11:09:48.729144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.729177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.208 [2024-10-15 11:09:48.729211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.729227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.208 [2024-10-15 11:09:48.729261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.729277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.208 [2024-10-15 11:09:48.729323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.208 [2024-10-15 11:09:48.729339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:08.208 #23 NEW cov: 12407 ft: 15439 corp: 21/42b lim: 5 exec/s: 23 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:10:08.208 [2024-10-15 11:09:48.819204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.209 [2024-10-15 11:09:48.819237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.467 #24 NEW cov: 12407 ft: 15461 corp: 22/43b lim: 5 exec/s: 24 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:10:08.467 [2024-10-15 11:09:48.909461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.467 [2024-10-15 11:09:48.909493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.467 #25 NEW cov: 12407 ft: 15511 corp: 23/44b lim: 5 exec/s: 25 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:10:08.467 [2024-10-15 11:09:48.999741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:10:08.467 [2024-10-15 11:09:48.999771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.467 [2024-10-15 11:09:48.999820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.467 [2024-10-15 11:09:48.999836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.467 #26 NEW cov: 12407 ft: 15520 corp: 24/46b lim: 5 exec/s: 13 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:10:08.467 #26 DONE cov: 12407 ft: 15520 corp: 24/46b lim: 5 exec/s: 13 rss: 74Mb 00:10:08.467 Done 26 runs in 2 second(s) 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:08.727 11:09:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:10:08.727 [2024-10-15 11:09:49.224117] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:08.727 [2024-10-15 11:09:49.224211] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717289 ] 00:10:08.986 [2024-10-15 11:09:49.408903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.986 [2024-10-15 11:09:49.447057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.986 [2024-10-15 11:09:49.505947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.986 [2024-10-15 11:09:49.522104] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:10:08.986 INFO: Running with entropic power schedule (0xFF, 100). 00:10:08.986 INFO: Seed: 1949470052 00:10:08.986 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:08.986 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:08.986 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:10:08.986 INFO: A corpus is not provided, starting from an empty corpus 00:10:08.986 #2 INITED exec/s: 0 rss: 65Mb 00:10:08.986 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:08.986 This may also happen if the target rejected all inputs we tried so far 00:10:08.986 [2024-10-15 11:09:49.567069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:08.986 [2024-10-15 11:09:49.567106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.986 [2024-10-15 11:09:49.567141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:08.986 [2024-10-15 11:09:49.567158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:09.503 NEW_FUNC[1/713]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:10:09.503 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:09.503 #23 NEW cov: 12202 ft: 12201 corp: 2/17b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:10:09.503 [2024-10-15 11:09:49.918414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b1a0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.503 [2024-10-15 11:09:49.918460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.503 NEW_FUNC[1/1]: 0xfb39e8 in rte_get_tsc_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:61 00:10:09.503 #33 NEW cov: 12316 ft: 13122 corp: 3/26b lim: 40 exec/s: 0 rss: 73Mb L: 9/16 MS: 5 CrossOver-ChangeByte-ChangeBit-ShuffleBytes-CrossOver- 00:10:09.503 [2024-10-15 11:09:49.978422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5bffff0a cdw11:ff1affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:09.503 [2024-10-15 11:09:49.978462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.503 #39 NEW cov: 12322 ft: 13339 corp: 4/35b lim: 40 exec/s: 0 rss: 73Mb L: 9/16 MS: 1 ShuffleBytes- 00:10:09.503 [2024-10-15 11:09:50.068732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.503 [2024-10-15 11:09:50.068775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.503 #41 NEW cov: 12407 ft: 13583 corp: 5/44b lim: 40 exec/s: 0 rss: 73Mb L: 9/16 MS: 2 CrossOver-CrossOver- 00:10:09.503 [2024-10-15 11:09:50.128864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5bffff0a cdw11:7aff1aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.503 [2024-10-15 11:09:50.128904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.761 #42 NEW cov: 12407 ft: 13686 corp: 6/54b lim: 40 exec/s: 0 rss: 73Mb L: 10/16 MS: 1 InsertByte- 00:10:09.761 [2024-10-15 11:09:50.219058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5bffff0a cdw11:ff1aff7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.761 [2024-10-15 11:09:50.219093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.762 #43 NEW cov: 12407 ft: 13835 corp: 7/63b lim: 40 exec/s: 0 rss: 73Mb L: 9/16 MS: 1 ChangeBit- 00:10:09.762 [2024-10-15 11:09:50.279215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff0affff cdw11:ffffc3ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.762 [2024-10-15 11:09:50.279248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.762 #44 NEW cov: 12407 ft: 13965 corp: 8/73b lim: 40 exec/s: 0 rss: 73Mb L: 10/16 MS: 1 InsertByte- 00:10:09.762 [2024-10-15 11:09:50.369457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff0a24ff cdw11:ffffc3ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.762 [2024-10-15 11:09:50.369489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.020 #45 NEW cov: 12407 ft: 14021 corp: 9/83b lim: 40 exec/s: 0 rss: 73Mb L: 10/16 MS: 1 ChangeByte- 00:10:10.020 [2024-10-15 11:09:50.459775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c00affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.020 [2024-10-15 11:09:50.459808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.020 [2024-10-15 11:09:50.459844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.020 [2024-10-15 11:09:50.459860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.020 NEW_FUNC[1/1]: 0x1c07588 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:10.020 #46 NEW cov: 12424 ft: 14055 corp: 10/100b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 InsertByte- 00:10:10.020 [2024-10-15 11:09:50.549909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5bffff0a cdw11:7a08e500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.020 [2024-10-15 11:09:50.549941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.020 #47 NEW cov: 12424 ft: 14142 corp: 11/110b lim: 40 exec/s: 47 rss: 73Mb L: 10/17 MS: 1 ChangeBinInt- 00:10:10.020 [2024-10-15 11:09:50.640182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:09ff0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.020 [2024-10-15 11:09:50.640216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.279 #53 NEW cov: 12424 ft: 14185 corp: 12/120b lim: 40 exec/s: 53 rss: 73Mb L: 10/17 MS: 1 InsertByte- 00:10:10.279 [2024-10-15 11:09:50.700323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.279 [2024-10-15 11:09:50.700353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.279 #54 NEW cov: 12424 ft: 14210 corp: 13/130b lim: 40 exec/s: 54 rss: 73Mb L: 10/17 MS: 1 EraseBytes- 00:10:10.279 [2024-10-15 11:09:50.760584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff0a24ff cdw11:ffffc3ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.279 [2024-10-15 11:09:50.760616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.279 [2024-10-15 11:09:50.760652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff5858 cdw11:58585858 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.279 [2024-10-15 11:09:50.760669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.279 [2024-10-15 11:09:50.760701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:58585858 cdw11:58585858 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.279 [2024-10-15 11:09:50.760717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.279 #55 NEW cov: 12424 ft: 14448 corp: 14/156b lim: 40 exec/s: 55 rss: 73Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:10:10.279 [2024-10-15 11:09:50.850734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b7aff0a cdw11:08ffe500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.279 [2024-10-15 11:09:50.850766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.538 #61 NEW cov: 12424 ft: 14483 corp: 15/166b lim: 40 exec/s: 61 rss: 73Mb L: 10/26 MS: 1 ShuffleBytes- 00:10:10.538 [2024-10-15 11:09:50.940977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f70a24ff 
cdw11:ffffc3ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.538 [2024-10-15 11:09:50.941010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.538 #62 NEW cov: 12424 ft: 14491 corp: 16/176b lim: 40 exec/s: 62 rss: 73Mb L: 10/26 MS: 1 ChangeByte- 00:10:10.538 [2024-10-15 11:09:51.001048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.538 [2024-10-15 11:09:51.001078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.538 #65 NEW cov: 12424 ft: 14501 corp: 17/187b lim: 40 exec/s: 65 rss: 73Mb L: 11/26 MS: 3 InsertByte-ShuffleBytes-CrossOver- 00:10:10.538 [2024-10-15 11:09:51.051175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.538 [2024-10-15 11:09:51.051205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.538 #66 NEW cov: 12424 ft: 14520 corp: 18/197b lim: 40 exec/s: 66 rss: 74Mb L: 10/26 MS: 1 ChangeBinInt- 00:10:10.539 [2024-10-15 11:09:51.141456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b7aff0a cdw11:08e50000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.539 [2024-10-15 11:09:51.141486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.797 #72 NEW cov: 12424 ft: 14554 corp: 19/206b lim: 40 exec/s: 72 rss: 74Mb L: 9/26 MS: 1 EraseBytes- 00:10:10.797 [2024-10-15 11:09:51.231793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff0a24ff cdw11:23ffffc3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.797 [2024-10-15 11:09:51.231824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.797 [2024-10-15 11:09:51.231874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff58 cdw11:58585858 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.797 [2024-10-15 11:09:51.231889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.797 [2024-10-15 11:09:51.231920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:58585858 cdw11:58585858 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.797 [2024-10-15 11:09:51.231936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.797 #73 NEW cov: 12424 ft: 14563 corp: 20/233b lim: 40 exec/s: 73 rss: 74Mb L: 27/27 MS: 1 InsertByte- 00:10:10.797 [2024-10-15 11:09:51.321913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5bfbff0a cdw11:ff1affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.797 [2024-10-15 11:09:51.321944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.797 [2024-10-15 11:09:51.372081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE 
(82) qid:0 cid:4 nsid:0 cdw10:5bfbff0a cdw11:ff1a5bff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.797 [2024-10-15 11:09:51.372113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.797 [2024-10-15 11:09:51.372163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff0a7a08 cdw11:e500ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.797 [2024-10-15 11:09:51.372180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.797 #75 NEW cov: 12424 ft: 14636 corp: 21/252b lim: 40 exec/s: 75 rss: 74Mb L: 19/27 MS: 2 ChangeBit-CrossOver- 00:10:11.056 [2024-10-15 11:09:51.432178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b1a0aff cdw11:ffffff2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.056 [2024-10-15 11:09:51.432208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.056 #76 NEW cov: 12431 ft: 14652 corp: 22/262b lim: 40 exec/s: 76 rss: 74Mb L: 10/27 MS: 1 InsertByte- 00:10:11.056 [2024-10-15 11:09:51.492366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.056 [2024-10-15 11:09:51.492397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.056 #77 NEW cov: 12431 ft: 14668 corp: 23/270b lim: 40 exec/s: 77 rss: 74Mb L: 8/27 MS: 1 EraseBytes- 00:10:11.056 [2024-10-15 11:09:51.542492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff20 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.056 [2024-10-15 11:09:51.542523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.056 #78 NEW cov: 12431 ft: 14694 corp: 24/280b lim: 40 exec/s: 39 rss: 74Mb L: 10/27 MS: 1 ChangeByte- 00:10:11.056 #78 DONE cov: 12431 ft: 14694 corp: 24/280b lim: 40 exec/s: 39 rss: 74Mb 00:10:11.056 Done 78 runs in 2 second(s) 00:10:11.056 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 
00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:11.315 11:09:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:10:11.315 [2024-10-15 11:09:51.724360] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:11.315 [2024-10-15 11:09:51.724429] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717649 ] 00:10:11.315 [2024-10-15 11:09:51.903333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.315 [2024-10-15 11:09:51.941510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.574 [2024-10-15 11:09:52.000438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.574 [2024-10-15 11:09:52.016597] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:10:11.574 INFO: Running with entropic power schedule (0xFF, 100). 00:10:11.574 INFO: Seed: 147503940 00:10:11.574 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:11.574 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:11.574 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:10:11.574 INFO: A corpus is not provided, starting from an empty corpus 00:10:11.574 #2 INITED exec/s: 0 rss: 66Mb 00:10:11.574 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:11.574 This may also happen if the target rejected all inputs we tried so far 00:10:11.574 [2024-10-15 11:09:52.072279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:11.574 [2024-10-15 11:09:52.072308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.574 [2024-10-15 11:09:52.072378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:11.574 [2024-10-15 11:09:52.072393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:11.833 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:10:11.833 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:11.833 #10 NEW cov: 12215 ft: 12214 corp: 2/17b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 3 ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:10:11.833 [2024-10-15 11:09:52.393100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:11.833 [2024-10-15 11:09:52.393137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.833 [2024-10-15 11:09:52.393196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffffc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:11.833 [2024-10-15 11:09:52.393210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:11.833 #11 NEW cov: 12328 ft: 12618 corp: 3/33b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 ChangeBinInt- 00:10:11.833 [2024-10-15 11:09:52.453181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:11.833 [2024-10-15 11:09:52.453209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.833 [2024-10-15 11:09:52.453268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffffc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:11.833 [2024-10-15 11:09:52.453282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.091 #12 NEW cov: 12334 ft: 12822 corp: 4/49b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 1 ChangeBinInt- 00:10:12.091 [2024-10-15 11:09:52.513453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.091 [2024-10-15 11:09:52.513482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.091 [2024-10-15 11:09:52.513557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000007ff SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.091 [2024-10-15 11:09:52.513572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.092 [2024-10-15 11:09:52.513629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.513642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.092 #13 NEW cov: 12419 ft: 13464 corp: 5/78b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:10:12.092 [2024-10-15 11:09:52.573461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.573487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.092 [2024-10-15 11:09:52.573546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.573560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.092 #19 NEW cov: 12419 ft: 13643 corp: 6/94b lim: 40 exec/s: 0 rss: 73Mb L: 16/29 MS: 1 ChangeBit- 00:10:12.092 [2024-10-15 11:09:52.613576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.613606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.092 [2024-10-15 11:09:52.613667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.613682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.092 #20 NEW cov: 12419 ft: 13704 corp: 7/114b lim: 40 exec/s: 0 rss: 73Mb L: 20/29 MS: 1 CrossOver- 00:10:12.092 [2024-10-15 11:09:52.653861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.653888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.092 [2024-10-15 11:09:52.653962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000007ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.653977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.092 [2024-10-15 11:09:52.654041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.654056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.092 #21 NEW cov: 12419 ft: 13825 corp: 8/143b lim: 40 
exec/s: 0 rss: 74Mb L: 29/29 MS: 1 CrossOver- 00:10:12.092 [2024-10-15 11:09:52.713715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.092 [2024-10-15 11:09:52.713742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.350 #32 NEW cov: 12419 ft: 14642 corp: 9/156b lim: 40 exec/s: 0 rss: 74Mb L: 13/29 MS: 1 EraseBytes- 00:10:12.350 [2024-10-15 11:09:52.753993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.350 [2024-10-15 11:09:52.754018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.350 [2024-10-15 11:09:52.754083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:07ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.351 [2024-10-15 11:09:52.754097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.351 #33 NEW cov: 12419 ft: 14680 corp: 10/173b lim: 40 exec/s: 0 rss: 74Mb L: 17/29 MS: 1 InsertByte- 00:10:12.351 [2024-10-15 11:09:52.793949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.351 [2024-10-15 11:09:52.793974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.351 #34 NEW cov: 12419 ft: 14718 corp: 11/186b lim: 40 exec/s: 0 rss: 74Mb L: 13/29 MS: 1 CMP- DE: "\000\010"- 00:10:12.351 [2024-10-15 11:09:52.854116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.351 [2024-10-15 11:09:52.854141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.351 #35 NEW cov: 12419 ft: 14811 corp: 12/199b lim: 40 exec/s: 0 rss: 74Mb L: 13/29 MS: 1 EraseBytes- 00:10:12.351 [2024-10-15 11:09:52.894239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00fb0000 cdw11:07ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.351 [2024-10-15 11:09:52.894266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.351 #36 NEW cov: 12419 ft: 14821 corp: 13/212b lim: 40 exec/s: 0 rss: 74Mb L: 13/29 MS: 1 ChangeBinInt- 00:10:12.351 [2024-10-15 11:09:52.954562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.351 [2024-10-15 11:09:52.954587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.351 [2024-10-15 11:09:52.954647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:08fffffc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.351 [2024-10-15 11:09:52.954660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:10:12.351 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:12.351 #37 NEW cov: 12442 ft: 14927 corp: 14/228b lim: 40 exec/s: 0 rss: 74Mb L: 16/29 MS: 1 CopyPart- 00:10:12.609 [2024-10-15 11:09:52.995185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:008f8f8f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.609 [2024-10-15 11:09:52.995210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.609 [2024-10-15 11:09:52.995285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:8f8f8f8f cdw11:8f8f8f8f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.609 [2024-10-15 11:09:52.995299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:52.995357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:8f8f8f8f cdw11:8f8f8f8f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:52.995371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:52.995426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:8f8f8f8f cdw11:8f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:52.995440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:52.995497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:52.995510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:12.610 #38 NEW cov: 12442 ft: 15308 corp: 15/268b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:10:12.610 [2024-10-15 11:09:53.034962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.034987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.035045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000007ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.035059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.035117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:bfffff00 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.035131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.610 #39 NEW cov: 12442 ft: 15321 corp: 16/297b lim: 40 exec/s: 39 rss: 74Mb L: 29/40 MS: 1 ChangeBit- 00:10:12.610 [2024-10-15 11:09:53.094994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.095020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.095098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.095113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.610 #40 NEW cov: 12442 ft: 15347 corp: 17/315b lim: 40 exec/s: 40 rss: 74Mb L: 18/40 MS: 1 PersAutoDict- DE: "\000\010"- 00:10:12.610 [2024-10-15 11:09:53.155306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.155331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.155407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.155421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.155479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffffc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.155492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.610 #41 NEW cov: 12442 ft: 15365 corp: 18/339b lim: 40 exec/s: 41 rss: 74Mb L: 24/40 MS: 1 InsertRepeatedBytes- 00:10:12.610 [2024-10-15 11:09:53.195248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.195273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.195334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ddffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.195348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.610 #42 NEW cov: 12442 ft: 15384 corp: 19/356b lim: 40 exec/s: 42 rss: 74Mb L: 17/40 MS: 1 InsertByte- 00:10:12.610 [2024-10-15 11:09:53.235708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.235733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.235793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.235808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.235864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.235878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.610 [2024-10-15 11:09:53.235936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.610 [2024-10-15 11:09:53.235955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:12.869 #46 NEW cov: 12442 ft: 15416 corp: 20/392b lim: 40 exec/s: 46 rss: 74Mb L: 36/40 MS: 4 InsertByte-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:10:12.869 [2024-10-15 11:09:53.275801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.275826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.275887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.275900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.275957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.275971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.276032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.276046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:12.869 #47 NEW cov: 12442 ft: 15429 corp: 21/431b lim: 40 exec/s: 47 rss: 74Mb L: 39/40 MS: 1 CrossOver- 00:10:12.869 [2024-10-15 11:09:53.335447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:07ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.335472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.869 #48 NEW cov: 12442 ft: 15436 corp: 22/444b lim: 40 exec/s: 48 rss: 74Mb L: 13/40 MS: 1 ChangeBit- 00:10:12.869 [2024-10-15 11:09:53.375587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.375612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.869 #49 NEW cov: 12442 ft: 15456 corp: 23/457b lim: 40 exec/s: 49 rss: 74Mb L: 13/40 MS: 1 ShuffleBytes- 00:10:12.869 [2024-10-15 
11:09:53.416215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.416241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.416302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.416316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.416377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.416391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.416448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.416462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:12.869 #50 NEW cov: 12442 ft: 15482 corp: 24/496b lim: 40 exec/s: 50 rss: 74Mb L: 39/40 MS: 1 CopyPart- 00:10:12.869 [2024-10-15 11:09:53.476016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.476045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:12.869 [2024-10-15 11:09:53.476120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dd89ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:12.869 [2024-10-15 11:09:53.476134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.128 #51 NEW cov: 12442 ft: 15496 corp: 25/513b lim: 40 exec/s: 51 rss: 74Mb L: 17/40 MS: 1 ChangeByte- 00:10:13.128 [2024-10-15 11:09:53.536343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.536368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.536443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.536458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.536519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:07ffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.536533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:13.128 #52 
NEW cov: 12442 ft: 15516 corp: 26/544b lim: 40 exec/s: 52 rss: 74Mb L: 31/40 MS: 1 PersAutoDict- DE: "\000\010"- 00:10:13.128 [2024-10-15 11:09:53.576155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.576180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.128 #53 NEW cov: 12442 ft: 15560 corp: 27/557b lim: 40 exec/s: 53 rss: 74Mb L: 13/40 MS: 1 InsertRepeatedBytes- 00:10:13.128 [2024-10-15 11:09:53.616451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00003200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.616477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.616537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.616552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.128 #54 NEW cov: 12442 ft: 15608 corp: 28/573b lim: 40 exec/s: 54 rss: 74Mb L: 16/40 MS: 1 ChangeByte- 00:10:13.128 [2024-10-15 11:09:53.656744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.656769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.656843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.656857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.656913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.656930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:13.128 #55 NEW cov: 12442 ft: 15632 corp: 29/601b lim: 40 exec/s: 55 rss: 74Mb L: 28/40 MS: 1 CrossOver- 00:10:13.128 [2024-10-15 11:09:53.716936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.716962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.717021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000007ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.717039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.717098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:6 nsid:0 cdw10:ffffff00 cdw11:0001ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.717112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:13.128 #56 NEW cov: 12442 ft: 15641 corp: 30/630b lim: 40 exec/s: 56 rss: 74Mb L: 29/40 MS: 1 ChangeBit- 00:10:13.128 [2024-10-15 11:09:53.756859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.756886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.128 [2024-10-15 11:09:53.756943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00006000 cdw11:000000fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.128 [2024-10-15 11:09:53.756958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.387 #57 NEW cov: 12442 ft: 15684 corp: 31/646b lim: 40 exec/s: 57 rss: 74Mb L: 16/40 MS: 1 ChangeByte- 00:10:13.387 [2024-10-15 11:09:53.796866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.796891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.388 #58 NEW cov: 12442 ft: 15727 corp: 32/659b lim: 40 exec/s: 58 rss: 74Mb L: 13/40 MS: 1 ChangeBit- 00:10:13.388 [2024-10-15 11:09:53.836949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.836975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.388 #59 NEW cov: 12442 ft: 15737 corp: 33/672b lim: 40 exec/s: 59 rss: 74Mb L: 13/40 MS: 1 ChangeByte- 00:10:13.388 [2024-10-15 11:09:53.877041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.877067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.388 #60 NEW cov: 12442 ft: 15760 corp: 34/685b lim: 40 exec/s: 60 rss: 74Mb L: 13/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\020"- 00:10:13.388 [2024-10-15 11:09:53.937362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00f900f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.937387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.388 [2024-10-15 11:09:53.937441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dd89ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.937458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.388 #61 NEW cov: 12442 ft: 15773 corp: 35/702b lim: 40 exec/s: 61 rss: 75Mb L: 17/40 MS: 1 ChangeBinInt- 
00:10:13.388 [2024-10-15 11:09:53.997560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.997586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.388 [2024-10-15 11:09:53.997659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0007ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.388 [2024-10-15 11:09:53.997673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.647 #62 NEW cov: 12442 ft: 15780 corp: 36/720b lim: 40 exec/s: 62 rss: 75Mb L: 18/40 MS: 1 PersAutoDict- DE: "\000\010"- 00:10:13.647 [2024-10-15 11:09:54.037803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.647 [2024-10-15 11:09:54.037828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:13.647 [2024-10-15 11:09:54.037902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000007ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.647 [2024-10-15 11:09:54.037916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:13.647 [2024-10-15 11:09:54.037970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:bfffff00 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:13.647 [2024-10-15 11:09:54.037984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:13.647 #68 NEW cov: 12442 ft: 15792 corp: 37/747b lim: 40 exec/s: 34 rss: 75Mb L: 27/40 MS: 1 EraseBytes- 00:10:13.647 #68 DONE cov: 12442 ft: 15792 corp: 37/747b lim: 40 exec/s: 34 rss: 75Mb 00:10:13.647 ###### Recommended dictionary. ###### 00:10:13.647 "\000\010" # Uses: 3 00:10:13.647 "\001\000\000\000\000\000\000\020" # Uses: 0 00:10:13.647 ###### End of recommended dictionary. 
###### 00:10:13.647 Done 68 runs in 2 second(s) 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:13.647 11:09:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:10:13.648 [2024-10-15 11:09:54.225832] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:13.648 [2024-10-15 11:09:54.225907] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718005 ] 00:10:13.907 [2024-10-15 11:09:54.406909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.907 [2024-10-15 11:09:54.445324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.907 [2024-10-15 11:09:54.504536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.907 [2024-10-15 11:09:54.520695] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:10:13.907 INFO: Running with entropic power schedule (0xFF, 100). 00:10:13.907 INFO: Seed: 2651513923 00:10:14.167 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:14.167 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:14.167 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:10:14.167 INFO: A corpus is not provided, starting from an empty corpus 00:10:14.167 #2 INITED exec/s: 0 rss: 65Mb 00:10:14.167 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:14.167 This may also happen if the target rejected all inputs we tried so far 00:10:14.168 [2024-10-15 11:09:54.586705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.168 [2024-10-15 11:09:54.586735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.168 [2024-10-15 11:09:54.586795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.168 [2024-10-15 11:09:54.586810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.168 [2024-10-15 11:09:54.586867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.168 [2024-10-15 11:09:54.586880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.168 [2024-10-15 11:09:54.586937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.168 [2024-10-15 11:09:54.586950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.427 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:10:14.427 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:14.427 #7 NEW cov: 12213 ft: 12214 corp: 2/39b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 5 ShuffleBytes-InsertByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:10:14.427 [2024-10-15 11:09:54.907589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.427 [2024-10-15 11:09:54.907636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.427 [2024-10-15 11:09:54.907714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.907733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:54.907797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.907814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:54.907878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.907896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.428 #13 NEW cov: 12326 ft: 12754 corp: 3/78b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:10:14.428 [2024-10-15 11:09:54.947523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.947550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:54.947621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.947636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:54.947690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.947704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:54.947760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:54.947774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.428 #14 NEW cov: 12332 ft: 13082 corp: 4/116b lim: 40 exec/s: 0 rss: 74Mb L: 38/39 MS: 1 ShuffleBytes- 00:10:14.428 [2024-10-15 11:09:55.007383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:55.007410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:55.007485] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:55.007499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.428 #15 NEW cov: 12417 ft: 13666 corp: 5/134b lim: 40 exec/s: 0 rss: 74Mb L: 18/39 MS: 1 CrossOver- 00:10:14.428 [2024-10-15 11:09:55.047786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:55.047811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:55.047886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:55.047904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:55.047960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:55.047974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.428 [2024-10-15 11:09:55.048034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.428 [2024-10-15 11:09:55.048049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.688 #16 NEW cov: 12417 ft: 13799 corp: 6/170b lim: 40 exec/s: 0 rss: 74Mb L: 36/39 MS: 1 CrossOver- 00:10:14.688 [2024-10-15 11:09:55.087904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.087930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.088003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.088019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.088078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.088092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.088148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.088172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.688 #22 NEW cov: 12417 ft: 13885 
corp: 7/208b lim: 40 exec/s: 0 rss: 74Mb L: 38/39 MS: 1 ChangeBit- 00:10:14.688 [2024-10-15 11:09:55.148078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.148105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.148163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.148177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.148231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000b1 cdw11:4d02df55 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.148245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.148298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ae2b0000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.148312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.688 #23 NEW cov: 12417 ft: 13971 corp: 8/246b lim: 40 exec/s: 0 rss: 74Mb L: 38/39 MS: 1 CMP- DE: "\261M\002\337U\256+\000"- 00:10:14.688 [2024-10-15 11:09:55.207920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.207950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.208008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.208022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.688 #24 NEW cov: 12417 ft: 14063 corp: 9/267b lim: 40 exec/s: 0 rss: 74Mb L: 21/39 MS: 1 CrossOver- 00:10:14.688 [2024-10-15 11:09:55.268430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.268456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.268531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.268545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.268603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.268616] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.688 [2024-10-15 11:09:55.268673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-15 11:09:55.268686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.688 #25 NEW cov: 12417 ft: 14145 corp: 10/303b lim: 40 exec/s: 0 rss: 74Mb L: 36/39 MS: 1 ChangeBit- 00:10:14.947 [2024-10-15 11:09:55.328238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.328264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.328339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.328354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.947 #26 NEW cov: 12417 ft: 14254 corp: 11/324b lim: 40 exec/s: 0 rss: 74Mb L: 21/39 MS: 1 CopyPart- 00:10:14.947 [2024-10-15 11:09:55.388421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.388446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.388502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.388516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.947 #27 NEW cov: 12417 ft: 14333 corp: 12/342b lim: 40 exec/s: 0 rss: 74Mb L: 18/39 MS: 1 ChangeByte- 00:10:14.947 [2024-10-15 11:09:55.428500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.428525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.428583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.428597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.947 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:14.947 #28 NEW cov: 12440 ft: 14396 corp: 13/363b lim: 40 exec/s: 0 rss: 74Mb L: 21/39 MS: 1 CrossOver- 00:10:14.947 [2024-10-15 11:09:55.468970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.468995] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.469056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.469070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.469142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000b1 cdw11:4d02df55 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.469156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.469210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00002b00 cdw11:ae000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.469223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:14.947 #29 NEW cov: 12440 ft: 14429 corp: 14/401b lim: 40 exec/s: 0 rss: 74Mb L: 38/39 MS: 1 ShuffleBytes- 00:10:14.947 [2024-10-15 11:09:55.528829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.528854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:14.947 [2024-10-15 11:09:55.528911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.947 [2024-10-15 11:09:55.528925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:14.947 #30 NEW cov: 12440 ft: 14457 corp: 15/422b lim: 40 exec/s: 30 rss: 74Mb L: 21/39 MS: 1 ChangeBit- 00:10:15.206 [2024-10-15 11:09:55.589012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.589041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.206 [2024-10-15 11:09:55.589114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.589129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.206 #31 NEW cov: 12440 ft: 14467 corp: 16/440b lim: 40 exec/s: 31 rss: 74Mb L: 18/39 MS: 1 ShuffleBytes- 00:10:15.206 [2024-10-15 11:09:55.629099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.629125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.206 [2024-10-15 11:09:55.629182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.629199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.206 #32 NEW cov: 12440 ft: 14508 corp: 17/461b lim: 40 exec/s: 32 rss: 74Mb L: 21/39 MS: 1 ShuffleBytes- 00:10:15.206 [2024-10-15 11:09:55.689246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000020 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.689271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.206 [2024-10-15 11:09:55.689344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.689358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.206 #33 NEW cov: 12440 ft: 14523 corp: 18/482b lim: 40 exec/s: 33 rss: 74Mb L: 21/39 MS: 1 ChangeBit- 00:10:15.206 [2024-10-15 11:09:55.729714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.729738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.206 [2024-10-15 11:09:55.729811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.729826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.206 [2024-10-15 11:09:55.729882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.206 [2024-10-15 11:09:55.729896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.206 [2024-10-15 11:09:55.729950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.207 [2024-10-15 11:09:55.729964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.207 #34 NEW cov: 12440 ft: 14533 corp: 19/516b lim: 40 exec/s: 34 rss: 74Mb L: 34/39 MS: 1 CrossOver- 00:10:15.207 [2024-10-15 11:09:55.789551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:288f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.207 [2024-10-15 11:09:55.789577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.207 [2024-10-15 11:09:55.789652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.207 [2024-10-15 11:09:55.789666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:10:15.207 #35 NEW cov: 12440 ft: 14552 corp: 20/534b lim: 40 exec/s: 35 rss: 74Mb L: 18/39 MS: 1 ChangeByte- 00:10:15.207 [2024-10-15 11:09:55.829672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000020 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.207 [2024-10-15 11:09:55.829696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.207 [2024-10-15 11:09:55.829772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.207 [2024-10-15 11:09:55.829786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.466 #36 NEW cov: 12440 ft: 14596 corp: 21/555b lim: 40 exec/s: 36 rss: 75Mb L: 21/39 MS: 1 ChangeBit- 00:10:15.466 [2024-10-15 11:09:55.890300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.890325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.890383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:008f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.890397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.890453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.890466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.890522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.890535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.890591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.890604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:15.466 #37 NEW cov: 12440 ft: 14653 corp: 22/595b lim: 40 exec/s: 37 rss: 75Mb L: 40/40 MS: 1 CrossOver- 00:10:15.466 [2024-10-15 11:09:55.950288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.950312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.950386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 
11:09:55.950401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.950456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000b1 cdw11:4d02df75 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.950470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.950526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ae2b0000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.950540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.466 #38 NEW cov: 12440 ft: 14660 corp: 23/633b lim: 40 exec/s: 38 rss: 75Mb L: 38/40 MS: 1 ChangeBit- 00:10:15.466 [2024-10-15 11:09:55.990585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.990610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.990684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:008f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.990699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.466 [2024-10-15 11:09:55.990758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.466 [2024-10-15 11:09:55.990771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.467 [2024-10-15 11:09:55.990826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.467 [2024-10-15 11:09:55.990839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.467 [2024-10-15 11:09:55.990893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.467 [2024-10-15 11:09:55.990907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:15.467 #39 NEW cov: 12440 ft: 14676 corp: 24/673b lim: 40 exec/s: 39 rss: 75Mb L: 40/40 MS: 1 ChangeBit- 00:10:15.467 [2024-10-15 11:09:56.050142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f00f1f1 cdw11:f1f1f1f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.467 [2024-10-15 11:09:56.050167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.467 #42 NEW cov: 12440 ft: 15365 corp: 25/687b lim: 40 exec/s: 42 rss: 75Mb L: 14/40 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:10:15.726 [2024-10-15 11:09:56.110773] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.110798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.726 [2024-10-15 11:09:56.110853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.110868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.726 [2024-10-15 11:09:56.110921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000b1 cdw11:4d02df55 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.110934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.726 [2024-10-15 11:09:56.110989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ae2b0000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.111002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.726 #43 NEW cov: 12440 ft: 15391 corp: 26/725b lim: 40 exec/s: 43 rss: 75Mb L: 38/40 MS: 1 ChangeBit- 00:10:15.726 [2024-10-15 11:09:56.150543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8fb14d cdw11:02df55ae SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.150568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.726 [2024-10-15 11:09:56.150643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.150657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.726 #44 NEW cov: 12440 ft: 15417 corp: 27/746b lim: 40 exec/s: 44 rss: 75Mb L: 21/40 MS: 1 PersAutoDict- DE: "\261M\002\337U\256+\000"- 00:10:15.726 [2024-10-15 11:09:56.211227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.211256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.726 [2024-10-15 11:09:56.211314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:008f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.211328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.726 [2024-10-15 11:09:56.211385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.726 [2024-10-15 11:09:56.211399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:10:15.726 [2024-10-15 11:09:56.211453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.211467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.211522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.211537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:15.727 #45 NEW cov: 12440 ft: 15451 corp: 28/786b lim: 40 exec/s: 45 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:10:15.727 [2024-10-15 11:09:56.251136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.251161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.251237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000d90a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.251252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.251310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.251324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.251380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.251393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.727 #46 NEW cov: 12440 ft: 15474 corp: 29/820b lim: 40 exec/s: 46 rss: 75Mb L: 34/40 MS: 1 ChangeByte- 00:10:15.727 [2024-10-15 11:09:56.291110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.291136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.291224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.291239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.291297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.291314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:10:15.727 #47 NEW cov: 12440 ft: 15668 corp: 30/848b lim: 40 exec/s: 47 rss: 75Mb L: 28/40 MS: 1 EraseBytes- 00:10:15.727 [2024-10-15 11:09:56.331263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.331289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.331348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:000000b1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.331363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.727 [2024-10-15 11:09:56.331418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:4d02df55 cdw11:ae2b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.727 [2024-10-15 11:09:56.331431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.987 #48 NEW cov: 12440 ft: 15724 corp: 31/876b lim: 40 exec/s: 48 rss: 75Mb L: 28/40 MS: 1 PersAutoDict- DE: "\261M\002\337U\256+\000"- 00:10:15.987 [2024-10-15 11:09:56.391616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.391641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.391718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.391733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.391791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.391805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.391860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.391874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.987 #49 NEW cov: 12440 ft: 15745 corp: 32/914b lim: 40 exec/s: 49 rss: 75Mb L: 38/40 MS: 1 ChangeBinInt- 00:10:15.987 [2024-10-15 11:09:56.431532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.431559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.431632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:008f0000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.431646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.431703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.431717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.987 #50 NEW cov: 12440 ft: 15750 corp: 33/944b lim: 40 exec/s: 50 rss: 75Mb L: 30/40 MS: 1 EraseBytes- 00:10:15.987 [2024-10-15 11:09:56.471810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:8f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.471836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.471910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.471925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.471980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00002600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.471994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.472054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.472069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.987 #56 NEW cov: 12440 ft: 15761 corp: 34/982b lim: 40 exec/s: 56 rss: 75Mb L: 38/40 MS: 1 ChangeBinInt- 00:10:15.987 [2024-10-15 11:09:56.512076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.512101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.512158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:008f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.512172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.512241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.512254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.512308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.512322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.512377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.512391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:15.987 #62 NEW cov: 12440 ft: 15773 corp: 35/1022b lim: 40 exec/s: 62 rss: 75Mb L: 40/40 MS: 1 CrossOver- 00:10:15.987 [2024-10-15 11:09:56.572109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a8f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.572135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.572220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.572235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.572291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.572308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:15.987 [2024-10-15 11:09:56.572363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.987 [2024-10-15 11:09:56.572376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:15.987 #63 NEW cov: 12440 ft: 15798 corp: 36/1056b lim: 40 exec/s: 31 rss: 75Mb L: 34/40 MS: 1 ShuffleBytes- 00:10:15.987 #63 DONE cov: 12440 ft: 15798 corp: 36/1056b lim: 40 exec/s: 31 rss: 75Mb 00:10:15.987 ###### Recommended dictionary. ###### 00:10:15.987 "\261M\002\337U\256+\000" # Uses: 2 00:10:15.987 ###### End of recommended dictionary. 
###### 00:10:15.987 Done 63 runs in 2 second(s) 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:16.247 11:09:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:10:16.247 [2024-10-15 11:09:56.739199] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:16.248 [2024-10-15 11:09:56.739272] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718339 ] 00:10:16.507 [2024-10-15 11:09:56.921747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.507 [2024-10-15 11:09:56.960645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.507 [2024-10-15 11:09:57.019656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.507 [2024-10-15 11:09:57.035818] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:10:16.507 INFO: Running with entropic power schedule (0xFF, 100). 00:10:16.507 INFO: Seed: 872556330 00:10:16.507 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:16.507 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:16.507 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:10:16.507 INFO: A corpus is not provided, starting from an empty corpus 00:10:16.507 #2 INITED exec/s: 0 rss: 66Mb 00:10:16.507 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:16.507 This may also happen if the target rejected all inputs we tried so far 00:10:16.507 [2024-10-15 11:09:57.091735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:16.507 [2024-10-15 11:09:57.091764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:16.507 [2024-10-15 11:09:57.091820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:16.507 [2024-10-15 11:09:57.091835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:16.507 [2024-10-15 11:09:57.091889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:16.507 [2024-10-15 11:09:57.091903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:16.507 [2024-10-15 11:09:57.091959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:16.507 [2024-10-15 11:09:57.091972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.076 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:10:17.076 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:17.076 #5 NEW cov: 12201 ft: 12200 corp: 2/36b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 3 CopyPart-CrossOver-InsertRepeatedBytes- 00:10:17.076 [2024-10-15 11:09:57.435051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.435094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.435197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.435214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.435306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.435327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.435433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.435451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.076 #16 NEW cov: 12314 ft: 12689 corp: 3/71b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:10:17.076 [2024-10-15 11:09:57.505221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.505256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.505345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.505362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.505460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.505476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.505572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858584 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.505589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.076 #17 NEW cov: 12320 ft: 13041 corp: 4/106b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:10:17.076 [2024-10-15 11:09:57.554930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.554960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.555063] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.555083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.076 #23 NEW cov: 12405 ft: 13746 corp: 5/128b lim: 40 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 EraseBytes- 00:10:17.076 [2024-10-15 11:09:57.624938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.624966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.076 #24 NEW cov: 12405 ft: 14100 corp: 6/137b lim: 40 exec/s: 0 rss: 73Mb L: 9/35 MS: 1 CrossOver- 00:10:17.076 [2024-10-15 11:09:57.676116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.676144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.076 [2024-10-15 11:09:57.676261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.076 [2024-10-15 11:09:57.676279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.077 [2024-10-15 11:09:57.676372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.077 [2024-10-15 11:09:57.676391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.077 [2024-10-15 11:09:57.676487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.077 [2024-10-15 11:09:57.676504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.077 #25 NEW cov: 12405 ft: 14277 corp: 7/173b lim: 40 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CrossOver- 00:10:17.336 [2024-10-15 11:09:57.726344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.726371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.726482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000085 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.726499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.726595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 
11:09:57.726612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.726710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858485 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.726726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.336 #26 NEW cov: 12405 ft: 14370 corp: 8/211b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:10:17.336 [2024-10-15 11:09:57.796693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.796722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.796821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.796837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.796946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.796963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.797070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:84858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.797087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.336 #27 NEW cov: 12405 ft: 14429 corp: 9/243b lim: 40 exec/s: 0 rss: 73Mb L: 32/38 MS: 1 EraseBytes- 00:10:17.336 [2024-10-15 11:09:57.846936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.336 [2024-10-15 11:09:57.846964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.336 [2024-10-15 11:09:57.847075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.847092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.337 [2024-10-15 11:09:57.847189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.847207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.337 [2024-10-15 11:09:57.847302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:8585857c 
cdw11:7a7a7985 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.847322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.337 #28 NEW cov: 12405 ft: 14464 corp: 10/278b lim: 40 exec/s: 0 rss: 73Mb L: 35/38 MS: 1 ChangeBinInt- 00:10:17.337 [2024-10-15 11:09:57.897248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.897277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.337 [2024-10-15 11:09:57.897385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000085 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.897403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.337 [2024-10-15 11:09:57.897499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:858585eb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.897516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.337 [2024-10-15 11:09:57.897614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858584 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.337 [2024-10-15 11:09:57.897632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.337 #29 NEW cov: 12405 ft: 14539 corp: 11/317b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 InsertByte- 00:10:17.596 [2024-10-15 11:09:57.967601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:57.967629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:57.967727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:57.967745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:57.967855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:8d858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:57.967873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:57.967976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858584 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:57.967993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.596 NEW_FUNC[1/1]: 0x1c07588 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:17.596 #30 NEW cov: 12428 ft: 14581 corp: 12/352b lim: 40 exec/s: 0 rss: 74Mb L: 35/39 MS: 1 ChangeBinInt- 00:10:17.596 [2024-10-15 11:09:58.017728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.017756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.017854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.017875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.017974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.017993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.018100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:8596857c cdw11:7a7a7985 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.018118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.596 #31 NEW cov: 12428 ft: 14647 corp: 13/387b lim: 40 exec/s: 0 rss: 74Mb L: 35/39 MS: 1 ChangeByte- 00:10:17.596 [2024-10-15 11:09:58.088094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.088123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.088224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.088241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.088341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.088359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.088450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85848585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.088468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.596 #32 NEW cov: 12428 ft: 14680 corp: 14/420b lim: 40 exec/s: 32 rss: 74Mb L: 33/39 MS: 1 CopyPart- 00:10:17.596 [2024-10-15 11:09:58.157480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:4 nsid:0 cdw10:0a0a850a cdw11:0a858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.157507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.596 #33 NEW cov: 12428 ft: 14707 corp: 15/433b lim: 40 exec/s: 33 rss: 74Mb L: 13/39 MS: 1 CrossOver- 00:10:17.596 [2024-10-15 11:09:58.208711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a758585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.208737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.208833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.208850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.208948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.208966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.596 [2024-10-15 11:09:58.209075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:8585857c cdw11:7a7a7985 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.596 [2024-10-15 11:09:58.209094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.856 #34 NEW cov: 12428 ft: 14741 corp: 16/468b lim: 40 exec/s: 34 rss: 74Mb L: 35/39 MS: 1 ChangeByte- 00:10:17.856 [2024-10-15 11:09:58.259067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.259094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.259195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000085 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.259210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.259308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:858585eb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.259326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.259424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85856e84 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.259440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.856 #35 NEW cov: 12428 ft: 14749 corp: 17/507b lim: 40 
exec/s: 35 rss: 74Mb L: 39/39 MS: 1 ChangeByte- 00:10:17.856 [2024-10-15 11:09:58.329475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:8585ff85 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.329502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.329612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.329630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.329721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.329738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.329844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85848585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.329860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.856 #36 NEW cov: 12428 ft: 14763 corp: 18/540b lim: 40 exec/s: 36 rss: 74Mb L: 33/39 MS: 1 InsertByte- 00:10:17.856 [2024-10-15 11:09:58.380212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.380240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.380353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85851616 cdw11:16161685 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.380372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.380468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.380488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.380594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85859685 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.380613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.380713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:7c7a7a79 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.380730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:10:17.856 #37 NEW cov: 12428 ft: 14819 corp: 19/580b lim: 40 exec/s: 37 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:10:17.856 [2024-10-15 11:09:58.450323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.450350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.450442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00002385 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.450461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.450555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.450573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:17.856 [2024-10-15 11:09:58.450670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:17.856 [2024-10-15 11:09:58.450687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:17.856 #38 NEW cov: 12428 ft: 14831 corp: 20/615b lim: 40 exec/s: 38 rss: 74Mb L: 35/40 MS: 1 ChangeBinInt- 00:10:18.115 [2024-10-15 11:09:58.499889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0acc85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.499917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.115 #39 NEW cov: 12428 ft: 14861 corp: 21/624b lim: 40 exec/s: 39 rss: 74Mb L: 9/40 MS: 1 ChangeByte- 00:10:18.115 [2024-10-15 11:09:58.571395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.571422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.571516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:858c8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.571533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.571631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.571649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.571744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:8585857c cdw11:7a7a7985 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.571762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:18.115 #40 NEW cov: 12428 ft: 14903 corp: 22/659b lim: 40 exec/s: 40 rss: 74Mb L: 35/40 MS: 1 ChangeBinInt- 00:10:18.115 [2024-10-15 11:09:58.621933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.621961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.622065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85851616 cdw11:16161685 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.622084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.622183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.622200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.622292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85859600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.622310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.622403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000079 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.622421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:18.115 #41 NEW cov: 12428 ft: 14906 corp: 23/699b lim: 40 exec/s: 41 rss: 74Mb L: 40/40 MS: 1 CMP- DE: "\000\000\000\000"- 00:10:18.115 [2024-10-15 11:09:58.692014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.692051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.692146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.692164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.692259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.692277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.692375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.692393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:18.115 #42 NEW cov: 12428 ft: 14943 corp: 24/734b lim: 40 exec/s: 42 rss: 74Mb L: 35/40 MS: 1 ShuffleBytes- 00:10:18.115 [2024-10-15 11:09:58.741951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:002bae57 cdw11:ebd5b670 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.741978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.115 [2024-10-15 11:09:58.742080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.115 [2024-10-15 11:09:58.742099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.373 #43 NEW cov: 12428 ft: 14956 corp: 25/751b lim: 40 exec/s: 43 rss: 74Mb L: 17/40 MS: 1 CMP- DE: "\000+\256W\353\325\266p"- 00:10:18.373 [2024-10-15 11:09:58.793092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.793118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.373 [2024-10-15 11:09:58.793212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.793228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.373 [2024-10-15 11:09:58.793331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.793349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:18.373 [2024-10-15 11:09:58.793445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85842685 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.793462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:18.373 #44 NEW cov: 12428 ft: 14989 corp: 26/784b lim: 40 exec/s: 44 rss: 74Mb L: 33/40 MS: 1 ChangeByte- 00:10:18.373 [2024-10-15 11:09:58.862649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a850a cdw11:0a858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.862675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.373 #45 NEW cov: 12428 ft: 15068 corp: 27/797b lim: 40 exec/s: 45 rss: 74Mb L: 13/40 MS: 1 ChangeBit- 00:10:18.373 [2024-10-15 11:09:58.933259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:002bae57 cdw11:eb110000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.933285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.373 [2024-10-15 11:09:58.933391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000a8585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.373 [2024-10-15 11:09:58.933408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.374 #46 NEW cov: 12428 ft: 15137 corp: 28/814b lim: 40 exec/s: 46 rss: 74Mb L: 17/40 MS: 1 ChangeBinInt- 00:10:18.633 [2024-10-15 11:09:59.003671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a850a cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.633 [2024-10-15 11:09:59.003699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.633 [2024-10-15 11:09:59.003795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:85858585 cdw11:850a8585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.633 [2024-10-15 11:09:59.003812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.633 #47 NEW cov: 12428 ft: 15149 corp: 29/836b lim: 40 exec/s: 47 rss: 74Mb L: 22/40 MS: 1 CrossOver- 00:10:18.633 [2024-10-15 11:09:59.074589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a8585 cdw11:85858500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.633 [2024-10-15 11:09:59.074616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:18.633 [2024-10-15 11:09:59.074713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00008585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.633 [2024-10-15 11:09:59.074730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:18.633 [2024-10-15 11:09:59.074832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:85858585 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.633 [2024-10-15 11:09:59.074850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:18.633 [2024-10-15 11:09:59.074948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:85858585 cdw11:85848585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:18.633 [2024-10-15 11:09:59.074965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:18.633 #48 NEW cov: 12428 ft: 15154 corp: 30/873b lim: 40 exec/s: 24 rss: 74Mb L: 37/40 MS: 1 EraseBytes- 00:10:18.633 #48 DONE cov: 12428 ft: 15154 corp: 30/873b lim: 40 exec/s: 24 rss: 74Mb 00:10:18.633 ###### Recommended dictionary. ###### 00:10:18.633 "\000\000\000\000" # Uses: 0 00:10:18.633 "\000+\256W\353\325\266p" # Uses: 0 00:10:18.633 ###### End of recommended dictionary. 
######
00:10:18.633 Done 48 runs in 2 second(s)
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:10:18.633 11:09:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
[2024-10-15 11:09:59.243128] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization...
00:10:18.633 [2024-10-15 11:09:59.243198] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718611 ] 00:10:18.914 [2024-10-15 11:09:59.422432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.914 [2024-10-15 11:09:59.461207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.914 [2024-10-15 11:09:59.520362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.914 [2024-10-15 11:09:59.536522] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:10:19.173 INFO: Running with entropic power schedule (0xFF, 100). 00:10:19.173 INFO: Seed: 3372546745 00:10:19.173 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:19.173 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:19.173 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:10:19.173 INFO: A corpus is not provided, starting from an empty corpus 00:10:19.173 #2 INITED exec/s: 0 rss: 66Mb 00:10:19.173 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:19.173 This may also happen if the target rejected all inputs we tried so far 00:10:19.173 [2024-10-15 11:09:59.581683] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.173 [2024-10-15 11:09:59.581722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.173 [2024-10-15 11:09:59.581757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.173 [2024-10-15 11:09:59.581775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.173 [2024-10-15 11:09:59.581806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.173 [2024-10-15 11:09:59.581823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.173 [2024-10-15 11:09:59.581853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.173 [2024-10-15 11:09:59.581870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.432 NEW_FUNC[1/715]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:10:19.432 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:19.432 #12 NEW cov: 12195 ft: 12194 corp: 2/31b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 5 InsertByte-ShuffleBytes-ChangeBit-CopyPart-InsertRepeatedBytes- 00:10:19.432 [2024-10-15 11:09:59.955886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:10:19.432 [2024-10-15 11:09:59.955947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.432 [2024-10-15 11:09:59.956058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:09:59.956082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.432 [2024-10-15 11:09:59.956189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:09:59.956213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.432 [2024-10-15 11:09:59.956325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:09:59.956349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.432 #13 NEW cov: 12308 ft: 12816 corp: 3/61b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBit- 00:10:19.432 [2024-10-15 11:10:00.026056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:10:00.026096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.432 [2024-10-15 11:10:00.026195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:10:00.026217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.432 [2024-10-15 11:10:00.026317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:10:00.026337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.432 [2024-10-15 11:10:00.026442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.432 [2024-10-15 11:10:00.026462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.432 #14 NEW cov: 12314 ft: 13108 corp: 4/91b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBit- 00:10:19.691 [2024-10-15 11:10:00.075803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.075835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.691 [2024-10-15 11:10:00.075935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.075953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.691 #15 NEW cov: 12399 ft: 13664 corp: 5/106b lim: 35 exec/s: 0 rss: 73Mb L: 15/30 MS: 1 EraseBytes- 00:10:19.691 [2024-10-15 11:10:00.126828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.126858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.691 [2024-10-15 11:10:00.126956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.126973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.691 [2024-10-15 11:10:00.127083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.127099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.691 [2024-10-15 11:10:00.127199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.127218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.691 #16 NEW cov: 12399 ft: 13728 corp: 6/136b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBit- 00:10:19.691 [2024-10-15 11:10:00.177386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.177412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.691 [2024-10-15 11:10:00.177517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.691 [2024-10-15 11:10:00.177537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.691 [2024-10-15 11:10:00.177632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.177652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.692 [2024-10-15 11:10:00.177749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.177770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.692 #17 NEW cov: 12399 ft: 13837 corp: 7/167b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertByte- 00:10:19.692 [2024-10-15 11:10:00.246926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.246955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:10:19.692 [2024-10-15 11:10:00.247055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.247073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.692 #18 NEW cov: 12399 ft: 13950 corp: 8/182b lim: 35 exec/s: 0 rss: 73Mb L: 15/31 MS: 1 ChangeBit- 00:10:19.692 [2024-10-15 11:10:00.318024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.318058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.692 [2024-10-15 11:10:00.318152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.318171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.692 [2024-10-15 11:10:00.318275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.318295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.692 [2024-10-15 11:10:00.318385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.692 [2024-10-15 11:10:00.318405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.950 #19 NEW cov: 12399 ft: 14014 corp: 9/212b lim: 35 exec/s: 0 rss: 74Mb L: 30/31 MS: 1 ChangeBit- 00:10:19.950 [2024-10-15 11:10:00.388426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.950 [2024-10-15 11:10:00.388465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.950 [2024-10-15 11:10:00.388565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.950 [2024-10-15 11:10:00.388586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.950 [2024-10-15 11:10:00.388686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.950 [2024-10-15 11:10:00.388706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.950 [2024-10-15 11:10:00.388804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.950 [2024-10-15 11:10:00.388824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:19.950 #20 NEW cov: 12399 ft: 14023 corp: 10/243b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 CopyPart- 00:10:19.950 
[2024-10-15 11:10:00.458264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.950 [2024-10-15 11:10:00.458295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.950 [2024-10-15 11:10:00.458412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.950 [2024-10-15 11:10:00.458434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.950 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:19.951 #21 NEW cov: 12422 ft: 14110 corp: 11/258b lim: 35 exec/s: 0 rss: 74Mb L: 15/31 MS: 1 ChangeByte- 00:10:19.951 [2024-10-15 11:10:00.539101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000bc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.951 [2024-10-15 11:10:00.539131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:19.951 [2024-10-15 11:10:00.539242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.951 [2024-10-15 11:10:00.539261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:19.951 [2024-10-15 11:10:00.539360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:19.951 [2024-10-15 11:10:00.539378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:19.951 #26 NEW cov: 12429 ft: 14353 corp: 12/280b lim: 35 exec/s: 0 rss: 74Mb L: 22/31 MS: 5 ChangeBit-ChangeBit-ChangeByte-InsertByte-CrossOver- 00:10:20.210 [2024-10-15 11:10:00.599804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.599838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.599949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.599968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.600066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.600086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.600184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.600207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:10:20.210 #27 NEW cov: 12429 ft: 14413 corp: 13/311b lim: 35 exec/s: 27 rss: 74Mb L: 31/31 MS: 1 ChangeByte- 00:10:20.210 [2024-10-15 11:10:00.669390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.669420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.669514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.669533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.210 #28 NEW cov: 12429 ft: 14565 corp: 14/327b lim: 35 exec/s: 28 rss: 74Mb L: 16/31 MS: 1 InsertByte- 00:10:20.210 [2024-10-15 11:10:00.720257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.720287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.720392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.720414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.720525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.720544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.210 #29 NEW cov: 12429 ft: 14589 corp: 15/353b lim: 35 exec/s: 29 rss: 74Mb L: 26/31 MS: 1 InsertRepeatedBytes- 00:10:20.210 [2024-10-15 11:10:00.791183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.791214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.791321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.791338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.791430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.791449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.210 [2024-10-15 11:10:00.791548] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.210 [2024-10-15 11:10:00.791569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:10:20.210 #30 NEW cov: 12429 ft: 14634 corp: 16/383b lim: 35 exec/s: 30 rss: 74Mb L: 30/31 MS: 1 ChangeBinInt- 00:10:20.469 [2024-10-15 11:10:00.861407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.861438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.861539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.861555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.861654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.861675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.861769] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.861791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.469 #31 NEW cov: 12429 ft: 14668 corp: 17/413b lim: 35 exec/s: 31 rss: 74Mb L: 30/31 MS: 1 ChangeBit- 00:10:20.469 [2024-10-15 11:10:00.911661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000bc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.911688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.911789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000e6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.911808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.911910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000e6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.911928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.912024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.912060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.469 #32 NEW cov: 12429 ft: 14683 corp: 18/446b lim: 35 exec/s: 32 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:10:20.469 [2024-10-15 11:10:00.981754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.981785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.469 
[2024-10-15 11:10:00.981899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.981915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:00.982013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000072 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:00.982035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.469 #33 NEW cov: 12429 ft: 14744 corp: 19/468b lim: 35 exec/s: 33 rss: 74Mb L: 22/33 MS: 1 InsertRepeatedBytes- 00:10:20.469 [2024-10-15 11:10:01.031874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:01.031902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.469 [2024-10-15 11:10:01.031998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.469 [2024-10-15 11:10:01.032021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.469 #34 NEW cov: 12429 ft: 14747 corp: 20/484b lim: 35 exec/s: 34 rss: 74Mb L: 16/33 MS: 1 EraseBytes- 00:10:20.728 [2024-10-15 11:10:01.103233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.728 [2024-10-15 11:10:01.103264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.728 [2024-10-15 11:10:01.103362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.728 [2024-10-15 11:10:01.103380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.728 [2024-10-15 11:10:01.103477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.728 [2024-10-15 11:10:01.103499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.103593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.103613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.729 #35 NEW cov: 12429 ft: 14756 corp: 21/516b lim: 35 exec/s: 35 rss: 74Mb L: 32/33 MS: 1 CopyPart- 00:10:20.729 [2024-10-15 11:10:01.172669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.172701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 
11:10:01.172794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000074 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.172814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.729 #36 NEW cov: 12429 ft: 14772 corp: 22/532b lim: 35 exec/s: 36 rss: 74Mb L: 16/33 MS: 1 ChangeBit- 00:10:20.729 [2024-10-15 11:10:01.223060] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.223090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.223200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.223218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.729 #37 NEW cov: 12429 ft: 14810 corp: 23/547b lim: 35 exec/s: 37 rss: 74Mb L: 15/33 MS: 1 ChangeByte- 00:10:20.729 [2024-10-15 11:10:01.274013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.274045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.274155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.274174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.274270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.274289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.274382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.274406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.729 #38 NEW cov: 12429 ft: 14818 corp: 24/581b lim: 35 exec/s: 38 rss: 74Mb L: 34/34 MS: 1 CMP- DE: "\001\000"- 00:10:20.729 [2024-10-15 11:10:01.344623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.344652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.344747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.344763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.344855] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.344872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.729 [2024-10-15 11:10:01.344969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.729 [2024-10-15 11:10:01.344986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.989 #41 NEW cov: 12429 ft: 14870 corp: 25/615b lim: 35 exec/s: 41 rss: 74Mb L: 34/34 MS: 3 InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes- 00:10:20.989 [2024-10-15 11:10:01.394538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.394566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.394677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.394696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.394796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.394816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.394910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.394929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.989 #42 NEW cov: 12429 ft: 14933 corp: 26/645b lim: 35 exec/s: 42 rss: 74Mb L: 30/34 MS: 1 ChangeByte- 00:10:20.989 [2024-10-15 11:10:01.445134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.445162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.445276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.445294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.445394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.445415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.445520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.445540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.445635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.445654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:20.989 #43 NEW cov: 12429 ft: 14984 corp: 27/680b lim: 35 exec/s: 43 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:10:20.989 [2024-10-15 11:10:01.494702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.494732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.494833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.494853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.494953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.494972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:20.989 #44 NEW cov: 12429 ft: 15003 corp: 28/701b lim: 35 exec/s: 44 rss: 74Mb L: 21/35 MS: 1 InsertRepeatedBytes- 00:10:20.989 [2024-10-15 11:10:01.544455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.544483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:20.989 [2024-10-15 11:10:01.544574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:20.989 [2024-10-15 11:10:01.544594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:20.989 #45 NEW cov: 12429 ft: 15007 corp: 29/721b lim: 35 exec/s: 22 rss: 74Mb L: 20/35 MS: 1 EraseBytes- 00:10:20.989 #45 DONE cov: 12429 ft: 15007 corp: 29/721b lim: 35 exec/s: 22 rss: 74Mb 00:10:20.989 ###### Recommended dictionary. ###### 00:10:20.989 "\001\000" # Uses: 0 00:10:20.989 ###### End of recommended dictionary. 
###### 00:10:20.989 Done 45 runs in 2 second(s) 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:21.249 11:10:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:10:21.249 [2024-10-15 11:10:01.733163] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:21.249 [2024-10-15 11:10:01.733233] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718922 ] 00:10:21.508 [2024-10-15 11:10:01.918742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.508 [2024-10-15 11:10:01.957939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.508 [2024-10-15 11:10:02.017066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.508 [2024-10-15 11:10:02.033224] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:10:21.508 INFO: Running with entropic power schedule (0xFF, 100). 00:10:21.508 INFO: Seed: 1575586692 00:10:21.508 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:21.508 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:21.508 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:10:21.508 INFO: A corpus is not provided, starting from an empty corpus 00:10:21.508 #2 INITED exec/s: 0 rss: 66Mb 00:10:21.508 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:21.508 This may also happen if the target rejected all inputs we tried so far 00:10:22.027 NEW_FUNC[1/701]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:10:22.027 NEW_FUNC[2/701]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:10:22.027 #9 NEW cov: 12065 ft: 12054 corp: 2/8b lim: 35 exec/s: 0 rss: 73Mb L: 7/7 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:22.027 [2024-10-15 11:10:02.443000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000541 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.443057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.027 [2024-10-15 11:10:02.443111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.443128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:22.027 [2024-10-15 11:10:02.443161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.443178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:22.027 NEW_FUNC[1/14]: 0x190e0f8 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:10:22.027 NEW_FUNC[2/14]: 0x190e338 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:10:22.027 #19 NEW cov: 12310 ft: 13055 corp: 3/31b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 5 InsertByte-ShuffleBytes-ChangeByte-CrossOver-InsertRepeatedBytes- 00:10:22.027 [2024-10-15 11:10:02.502873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:4 cdw10:00000360 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.502905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.027 #27 NEW cov: 12316 ft: 13497 corp: 4/40b lim: 35 exec/s: 0 rss: 74Mb L: 9/23 MS: 3 ChangeByte-ChangeByte-CMP- DE: "z\236\001\016Z\256+\000"- 00:10:22.027 [2024-10-15 11:10:02.553019] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.553057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.027 #30 NEW cov: 12401 ft: 13677 corp: 5/48b lim: 35 exec/s: 0 rss: 74Mb L: 8/23 MS: 3 InsertByte-CMP-InsertRepeatedBytes- DE: "\000\004"- 00:10:22.027 [2024-10-15 11:10:02.603319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000541 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.603351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.027 [2024-10-15 11:10:02.603386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.603402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:22.027 [2024-10-15 11:10:02.603434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.603450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:22.027 [2024-10-15 11:10:02.603481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.027 [2024-10-15 11:10:02.603496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:22.286 #36 NEW cov: 12401 ft: 14251 corp: 6/81b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 CopyPart- 00:10:22.287 [2024-10-15 11:10:02.693563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000541 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.693594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.287 [2024-10-15 11:10:02.693644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.693660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:22.287 [2024-10-15 11:10:02.693692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.693708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:22.287 [2024-10-15 11:10:02.693739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005af 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.693756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:22.287 [2024-10-15 11:10:02.693789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.693805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:22.287 #37 NEW cov: 12401 ft: 14394 corp: 7/116b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\004"- 00:10:22.287 [2024-10-15 11:10:02.783633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.783664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.287 #38 NEW cov: 12401 ft: 14444 corp: 8/124b lim: 35 exec/s: 0 rss: 74Mb L: 8/35 MS: 1 ChangeBinInt- 00:10:22.287 [2024-10-15 11:10:02.873842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000360 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.287 [2024-10-15 11:10:02.873872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.546 #39 NEW cov: 12401 ft: 14546 corp: 9/135b lim: 35 exec/s: 0 rss: 74Mb L: 11/35 MS: 1 PersAutoDict- DE: "\000\004"- 00:10:22.546 [2024-10-15 11:10:02.964073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.546 [2024-10-15 11:10:02.964104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.546 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:22.546 #40 NEW cov: 12424 ft: 14611 corp: 10/143b lim: 35 exec/s: 0 rss: 74Mb L: 8/35 MS: 1 ShuffleBytes- 00:10:22.546 [2024-10-15 11:10:03.054341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.546 [2024-10-15 11:10:03.054372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.546 #41 NEW cov: 12424 ft: 14678 corp: 11/151b lim: 35 exec/s: 41 rss: 74Mb L: 8/35 MS: 1 ChangeByte- 00:10:22.546 [2024-10-15 11:10:03.114485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.546 [2024-10-15 11:10:03.114516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.805 #42 NEW cov: 12424 ft: 14702 corp: 12/161b lim: 35 exec/s: 42 rss: 74Mb L: 10/35 MS: 1 PersAutoDict- DE: "\000\004"- 00:10:22.805 [2024-10-15 11:10:03.204705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.805 [2024-10-15 11:10:03.204736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.805 #43 NEW cov: 12424 ft: 14793 corp: 13/172b lim: 35 exec/s: 43 
rss: 74Mb L: 11/35 MS: 1 ChangeBinInt- 00:10:22.805 [2024-10-15 11:10:03.295080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.805 [2024-10-15 11:10:03.295113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.805 [2024-10-15 11:10:03.295148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.805 [2024-10-15 11:10:03.295164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:22.806 [2024-10-15 11:10:03.295195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.806 [2024-10-15 11:10:03.295212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:22.806 #44 NEW cov: 12424 ft: 14830 corp: 14/197b lim: 35 exec/s: 44 rss: 74Mb L: 25/35 MS: 1 InsertRepeatedBytes- 00:10:22.806 [2024-10-15 11:10:03.385297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000541 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.806 [2024-10-15 11:10:03.385334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:22.806 [2024-10-15 11:10:03.385369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.806 [2024-10-15 11:10:03.385386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:22.806 [2024-10-15 11:10:03.385417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.806 [2024-10-15 11:10:03.385433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:22.806 #45 NEW cov: 12424 ft: 14877 corp: 15/224b lim: 35 exec/s: 45 rss: 74Mb L: 27/35 MS: 1 CrossOver- 00:10:23.065 [2024-10-15 11:10:03.445346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.065 [2024-10-15 11:10:03.445376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.065 #46 NEW cov: 12424 ft: 14921 corp: 16/235b lim: 35 exec/s: 46 rss: 74Mb L: 11/35 MS: 1 ChangeBit- 00:10:23.065 [2024-10-15 11:10:03.535607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000047a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.065 [2024-10-15 11:10:03.535638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.065 #47 NEW cov: 12424 ft: 15000 corp: 17/243b lim: 35 exec/s: 47 rss: 74Mb L: 8/35 MS: 1 PersAutoDict- DE: "z\236\001\016Z\256+\000"- 00:10:23.065 [2024-10-15 11:10:03.585684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000039e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.065 [2024-10-15 11:10:03.585714] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.065 #48 NEW cov: 12424 ft: 15017 corp: 18/254b lim: 35 exec/s: 48 rss: 74Mb L: 11/35 MS: 1 ShuffleBytes- 00:10:23.065 [2024-10-15 11:10:03.645930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.065 [2024-10-15 11:10:03.645960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:23.324 #53 NEW cov: 12424 ft: 15146 corp: 19/273b lim: 35 exec/s: 53 rss: 75Mb L: 19/35 MS: 5 EraseBytes-ChangeByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:10:23.324 [2024-10-15 11:10:03.736147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.324 [2024-10-15 11:10:03.736181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.324 #54 NEW cov: 12424 ft: 15218 corp: 20/284b lim: 35 exec/s: 54 rss: 75Mb L: 11/35 MS: 1 ChangeByte- 00:10:23.324 [2024-10-15 11:10:03.826375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.324 [2024-10-15 11:10:03.826407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.324 #55 NEW cov: 12424 ft: 15225 corp: 21/293b lim: 35 exec/s: 55 rss: 75Mb L: 9/35 MS: 1 InsertByte- 00:10:23.324 [2024-10-15 11:10:03.876468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.324 [2024-10-15 11:10:03.876499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.324 #56 NEW cov: 12424 ft: 15239 corp: 22/304b lim: 35 exec/s: 56 rss: 75Mb L: 11/35 MS: 1 ShuffleBytes- 00:10:23.584 [2024-10-15 11:10:03.966878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:03.966911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.584 [2024-10-15 11:10:03.966962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:03.966979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:23.584 [2024-10-15 11:10:03.967010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:03.967032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:23.584 [2024-10-15 11:10:03.967064] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:03.967080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:23.584 #57 NEW cov: 12424 ft: 15247 corp: 
23/335b lim: 35 exec/s: 57 rss: 75Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:10:23.584 [2024-10-15 11:10:04.057113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000541 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:04.057144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:23.584 [2024-10-15 11:10:04.057180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:04.057196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:23.584 [2024-10-15 11:10:04.057227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:04.057243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:23.584 [2024-10-15 11:10:04.057274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:23.584 [2024-10-15 11:10:04.057290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:23.584 #58 NEW cov: 12424 ft: 15290 corp: 24/364b lim: 35 exec/s: 29 rss: 75Mb L: 29/35 MS: 1 PersAutoDict- DE: "\000\004"- 00:10:23.584 #58 DONE cov: 12424 ft: 15290 corp: 24/364b lim: 35 exec/s: 29 rss: 75Mb 00:10:23.584 ###### Recommended dictionary. ###### 00:10:23.584 "z\236\001\016Z\256+\000" # Uses: 1 00:10:23.584 "\000\004" # Uses: 4 00:10:23.584 ###### End of recommended dictionary. 
###### 00:10:23.584 Done 58 runs in 2 second(s) 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:23.843 11:10:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:10:23.843 [2024-10-15 11:10:04.277830] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:23.843 [2024-10-15 11:10:04.277901] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719273 ] 00:10:23.843 [2024-10-15 11:10:04.459088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.102 [2024-10-15 11:10:04.498769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.102 [2024-10-15 11:10:04.558022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.102 [2024-10-15 11:10:04.574180] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:10:24.102 INFO: Running with entropic power schedule (0xFF, 100). 00:10:24.102 INFO: Seed: 4116576216 00:10:24.102 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:24.102 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:24.102 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:10:24.102 INFO: A corpus is not provided, starting from an empty corpus 00:10:24.102 #2 INITED exec/s: 0 rss: 66Mb 00:10:24.102 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:24.102 This may also happen if the target rejected all inputs we tried so far 00:10:24.102 [2024-10-15 11:10:04.629743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.102 [2024-10-15 11:10:04.629776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.102 [2024-10-15 11:10:04.629852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.102 [2024-10-15 11:10:04.629869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.361 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:10:24.361 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:24.361 #10 NEW cov: 12287 ft: 12286 corp: 2/48b lim: 105 exec/s: 0 rss: 73Mb L: 47/47 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:10:24.361 [2024-10-15 11:10:04.970578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.361 [2024-10-15 11:10:04.970628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.361 [2024-10-15 11:10:04.970706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073696116735 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.361 [2024-10-15 11:10:04.970729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.620 #11 NEW cov: 12400 ft: 12704 corp: 3/95b lim: 105 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 ChangeByte- 00:10:24.620 
[2024-10-15 11:10:05.030771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.030803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.030843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.030860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.030919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.030933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:24.620 #12 NEW cov: 12406 ft: 13381 corp: 4/177b lim: 105 exec/s: 0 rss: 74Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:10:24.620 [2024-10-15 11:10:05.070709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.070738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.070778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.070795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.620 #13 NEW cov: 12491 ft: 13774 corp: 5/225b lim: 105 exec/s: 0 rss: 74Mb L: 48/82 MS: 1 InsertByte- 00:10:24.620 [2024-10-15 11:10:05.110862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.110891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.110944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.110960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.620 #14 NEW cov: 12491 ft: 13870 corp: 6/278b lim: 105 exec/s: 0 rss: 74Mb L: 53/82 MS: 1 EraseBytes- 00:10:24.620 [2024-10-15 11:10:05.171036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.171064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.171104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 
11:10:05.171120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.620 #15 NEW cov: 12491 ft: 13980 corp: 7/339b lim: 105 exec/s: 0 rss: 74Mb L: 61/82 MS: 1 CMP- DE: "\001+\256`K\2620J"- 00:10:24.620 [2024-10-15 11:10:05.231425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.231457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.231501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.231517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.231588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.231603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:24.620 [2024-10-15 11:10:05.231662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.620 [2024-10-15 11:10:05.231678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:24.879 #18 NEW cov: 12491 ft: 14556 corp: 8/436b lim: 105 exec/s: 0 rss: 74Mb L: 97/97 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:10:24.879 [2024-10-15 11:10:05.271271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.271300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.879 [2024-10-15 11:10:05.271351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.271368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.879 #19 NEW cov: 12491 ft: 14624 corp: 9/479b lim: 105 exec/s: 0 rss: 74Mb L: 43/97 MS: 1 EraseBytes- 00:10:24.879 [2024-10-15 11:10:05.331696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.331724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.879 [2024-10-15 11:10:05.331775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.331791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.879 
[2024-10-15 11:10:05.331845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.331861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:24.879 [2024-10-15 11:10:05.331918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.331932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:24.879 #20 NEW cov: 12491 ft: 14643 corp: 10/576b lim: 105 exec/s: 0 rss: 74Mb L: 97/97 MS: 1 ShuffleBytes- 00:10:24.879 [2024-10-15 11:10:05.391623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.391650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.879 [2024-10-15 11:10:05.391690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.391709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.879 #21 NEW cov: 12491 ft: 14743 corp: 11/637b lim: 105 exec/s: 0 rss: 74Mb L: 61/97 MS: 1 PersAutoDict- DE: "\001+\256`K\2620J"- 00:10:24.879 [2024-10-15 11:10:05.451676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.451705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.879 #22 NEW cov: 12491 ft: 15155 corp: 12/673b lim: 105 exec/s: 0 rss: 74Mb L: 36/97 MS: 1 EraseBytes- 00:10:24.879 [2024-10-15 11:10:05.492152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.879 [2024-10-15 11:10:05.492179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.879 [2024-10-15 11:10:05.492233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709289471 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.880 [2024-10-15 11:10:05.492249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.880 [2024-10-15 11:10:05.492324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:24.880 [2024-10-15 11:10:05.492340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:24.880 [2024-10-15 11:10:05.492399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:24.880 [2024-10-15 11:10:05.492415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.139 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:25.139 #23 NEW cov: 12514 ft: 15195 corp: 13/770b lim: 105 exec/s: 0 rss: 74Mb L: 97/97 MS: 1 ChangeBit- 00:10:25.139 [2024-10-15 11:10:05.531994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.532022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.139 [2024-10-15 11:10:05.532067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.532083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.139 #24 NEW cov: 12514 ft: 15244 corp: 14/823b lim: 105 exec/s: 0 rss: 74Mb L: 53/97 MS: 1 ChangeBit- 00:10:25.139 [2024-10-15 11:10:05.572158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.572185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.139 [2024-10-15 11:10:05.572244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.572262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.139 #25 NEW cov: 12514 ft: 15290 corp: 15/884b lim: 105 exec/s: 0 rss: 74Mb L: 61/97 MS: 1 PersAutoDict- DE: "\001+\256`K\2620J"- 00:10:25.139 [2024-10-15 11:10:05.612253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:512 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.612283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.139 [2024-10-15 11:10:05.612340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.612356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.139 #26 NEW cov: 12514 ft: 15329 corp: 16/937b lim: 105 exec/s: 26 rss: 74Mb L: 53/97 MS: 1 EraseBytes- 00:10:25.139 [2024-10-15 11:10:05.672410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.672438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.139 [2024-10-15 11:10:05.672477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 
len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.672494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.139 #27 NEW cov: 12514 ft: 15335 corp: 17/998b lim: 105 exec/s: 27 rss: 74Mb L: 61/97 MS: 1 ChangeASCIIInt- 00:10:25.139 [2024-10-15 11:10:05.732600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.732628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.139 [2024-10-15 11:10:05.732669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.139 [2024-10-15 11:10:05.732685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.399 #28 NEW cov: 12514 ft: 15345 corp: 18/1059b lim: 105 exec/s: 28 rss: 74Mb L: 61/97 MS: 1 ChangeBinInt- 00:10:25.399 [2024-10-15 11:10:05.792912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.792940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.792980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.792995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.793072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.793089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.399 #29 NEW cov: 12514 ft: 15377 corp: 19/1131b lim: 105 exec/s: 29 rss: 74Mb L: 72/97 MS: 1 EraseBytes- 00:10:25.399 [2024-10-15 11:10:05.833139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.833168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.833226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.833241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.833302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.833319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.833375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.833390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.399 #30 NEW cov: 12514 ft: 15398 corp: 20/1224b lim: 105 exec/s: 30 rss: 75Mb L: 93/97 MS: 1 InsertRepeatedBytes- 00:10:25.399 [2024-10-15 11:10:05.893453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.893482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.893541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.893556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.893612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.893628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.893684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.893699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.893756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.893771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:25.399 #31 NEW cov: 12514 ft: 15530 corp: 21/1329b lim: 105 exec/s: 31 rss: 75Mb L: 105/105 MS: 1 PersAutoDict- DE: "\001+\256`K\2620J"- 00:10:25.399 [2024-10-15 11:10:05.933301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.933329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.933379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.933396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.933453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.933469] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.399 #32 NEW cov: 12514 ft: 15558 corp: 22/1393b lim: 105 exec/s: 32 rss: 75Mb L: 64/105 MS: 1 CrossOver- 00:10:25.399 [2024-10-15 11:10:05.993386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.993419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.399 [2024-10-15 11:10:05.993480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.399 [2024-10-15 11:10:05.993498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.399 #33 NEW cov: 12514 ft: 15595 corp: 23/1452b lim: 105 exec/s: 33 rss: 75Mb L: 59/105 MS: 1 EraseBytes- 00:10:25.659 [2024-10-15 11:10:06.033872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.033901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.033963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.033980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.034042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.034059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.034116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.034132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.034191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.034207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:25.659 #34 NEW cov: 12514 ft: 15645 corp: 24/1557b lim: 105 exec/s: 34 rss: 75Mb L: 105/105 MS: 1 CrossOver- 00:10:25.659 [2024-10-15 11:10:06.073522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.073552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.073607] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.073624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.659 #35 NEW cov: 12514 ft: 15723 corp: 25/1604b lim: 105 exec/s: 35 rss: 75Mb L: 47/105 MS: 1 ChangeBit- 00:10:25.659 [2024-10-15 11:10:06.113936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.113965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.114041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18391574978274263039 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.114059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.114115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.114135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.114192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.114209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.659 #36 NEW cov: 12514 ft: 15727 corp: 26/1702b lim: 105 exec/s: 36 rss: 75Mb L: 98/105 MS: 1 InsertByte- 00:10:25.659 [2024-10-15 11:10:06.154042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13835058055282163711 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.154071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.154150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.154166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.154223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.154239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.154299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.154315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.659 #37 NEW cov: 12514 ft: 15779 
corp: 27/1795b lim: 105 exec/s: 37 rss: 75Mb L: 93/105 MS: 1 ChangeBit- 00:10:25.659 [2024-10-15 11:10:06.214245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.214275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.214343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.214360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.214415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.214432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.659 [2024-10-15 11:10:06.214491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.214507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.659 #38 NEW cov: 12514 ft: 15801 corp: 28/1890b lim: 105 exec/s: 38 rss: 75Mb L: 95/105 MS: 1 CrossOver- 00:10:25.659 [2024-10-15 11:10:06.274139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.659 [2024-10-15 11:10:06.274167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.660 [2024-10-15 11:10:06.274207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.660 [2024-10-15 11:10:06.274226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.918 #39 NEW cov: 12514 ft: 15826 corp: 29/1944b lim: 105 exec/s: 39 rss: 75Mb L: 54/105 MS: 1 InsertByte- 00:10:25.918 [2024-10-15 11:10:06.334319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:512 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.334347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.334388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.334405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.918 #40 NEW cov: 12514 ft: 15833 corp: 30/1997b lim: 105 exec/s: 40 rss: 75Mb L: 53/105 MS: 1 ShuffleBytes- 00:10:25.918 [2024-10-15 11:10:06.394427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.394454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.394496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551586 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.394512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.918 #41 NEW cov: 12514 ft: 15842 corp: 31/2044b lim: 105 exec/s: 41 rss: 75Mb L: 47/105 MS: 1 ChangeByte- 00:10:25.918 [2024-10-15 11:10:06.454908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5454475193104445024 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.454936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.454992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.455008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.455070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.455102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.455161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.455176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.918 #42 NEW cov: 12514 ft: 15868 corp: 32/2137b lim: 105 exec/s: 42 rss: 75Mb L: 93/105 MS: 1 PersAutoDict- DE: "\001+\256`K\2620J"- 00:10:25.918 [2024-10-15 11:10:06.494984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.495011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.495076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.495096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.495155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.495171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.918 [2024-10-15 11:10:06.495230] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.918 [2024-10-15 11:10:06.495246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.918 #43 NEW cov: 12514 ft: 15881 corp: 33/2232b lim: 105 exec/s: 43 rss: 75Mb L: 95/105 MS: 1 PersAutoDict- DE: "\001+\256`K\2620J"- 00:10:26.178 [2024-10-15 11:10:06.555168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551407 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.555196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.178 [2024-10-15 11:10:06.555257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744070672875519 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.555273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:26.178 [2024-10-15 11:10:06.555330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.555346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:26.178 [2024-10-15 11:10:06.555419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.555435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:26.178 #44 NEW cov: 12514 ft: 15890 corp: 34/2328b lim: 105 exec/s: 44 rss: 75Mb L: 96/105 MS: 1 InsertByte- 00:10:26.178 [2024-10-15 11:10:06.615186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.615213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.178 [2024-10-15 11:10:06.615278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.615294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:26.178 [2024-10-15 11:10:06.615352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:19379 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.178 [2024-10-15 11:10:06.615368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:26.178 #45 NEW cov: 12514 ft: 15898 corp: 35/2394b lim: 105 exec/s: 22 rss: 75Mb L: 66/105 MS: 1 CrossOver- 00:10:26.178 #45 DONE cov: 12514 ft: 15898 corp: 35/2394b lim: 105 exec/s: 22 rss: 75Mb 00:10:26.178 ###### Recommended dictionary. 
###### 00:10:26.178 "\001+\256`K\2620J" # Uses: 5 00:10:26.178 ###### End of recommended dictionary. ###### 00:10:26.178 Done 45 runs in 2 second(s) 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:26.178 11:10:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:10:26.178 [2024-10-15 11:10:06.785417] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:26.178 [2024-10-15 11:10:06.785502] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719626 ] 00:10:26.437 [2024-10-15 11:10:06.969227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.437 [2024-10-15 11:10:07.007892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.437 [2024-10-15 11:10:07.066880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.697 [2024-10-15 11:10:07.083024] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:10:26.697 INFO: Running with entropic power schedule (0xFF, 100). 00:10:26.697 INFO: Seed: 2329604318 00:10:26.697 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:26.697 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:26.697 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:10:26.697 INFO: A corpus is not provided, starting from an empty corpus 00:10:26.697 #2 INITED exec/s: 0 rss: 66Mb 00:10:26.697 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:26.697 This may also happen if the target rejected all inputs we tried so far 00:10:26.697 [2024-10-15 11:10:07.138401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521602796649853 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:26.697 [2024-10-15 11:10:07.138433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.956 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:10:26.956 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:26.956 #5 NEW cov: 12308 ft: 12304 corp: 2/41b lim: 120 exec/s: 0 rss: 73Mb L: 40/40 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:10:26.956 [2024-10-15 11:10:07.479314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521602796649853 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:26.956 [2024-10-15 11:10:07.479356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.956 #11 NEW cov: 12421 ft: 12885 corp: 3/69b lim: 120 exec/s: 0 rss: 73Mb L: 28/40 MS: 1 EraseBytes- 00:10:26.956 [2024-10-15 11:10:07.539446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9011139904557383037 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:26.956 [2024-10-15 11:10:07.539476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.956 #12 NEW cov: 12427 ft: 13197 corp: 4/113b lim: 120 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 CMP- DE: "\016\000\000\000"- 00:10:26.956 [2024-10-15 11:10:07.579529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521602796649853 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:26.956 [2024-10-15 11:10:07.579558] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.215 #13 NEW cov: 12512 ft: 13527 corp: 5/141b lim: 120 exec/s: 0 rss: 74Mb L: 28/44 MS: 1 ChangeByte- 00:10:27.215 [2024-10-15 11:10:07.639664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.215 [2024-10-15 11:10:07.639693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.215 #18 NEW cov: 12512 ft: 13651 corp: 6/175b lim: 120 exec/s: 0 rss: 74Mb L: 34/44 MS: 5 EraseBytes-PersAutoDict-ChangeByte-ChangeBinInt-InsertRepeatedBytes- DE: "\016\000\000\000"- 00:10:27.215 [2024-10-15 11:10:07.699824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521602796649853 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.215 [2024-10-15 11:10:07.699853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.215 #19 NEW cov: 12512 ft: 13768 corp: 7/203b lim: 120 exec/s: 0 rss: 74Mb L: 28/44 MS: 1 ChangeBit- 00:10:27.215 [2024-10-15 11:10:07.739945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.215 [2024-10-15 11:10:07.739974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.215 #20 NEW cov: 12512 ft: 13879 corp: 8/231b lim: 120 exec/s: 0 rss: 74Mb L: 28/44 MS: 1 ChangeBinInt- 00:10:27.215 [2024-10-15 11:10:07.800124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.215 [2024-10-15 11:10:07.800153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.215 #21 NEW cov: 12512 ft: 13998 corp: 9/259b lim: 120 exec/s: 0 rss: 74Mb L: 28/44 MS: 1 ShuffleBytes- 00:10:27.474 [2024-10-15 11:10:07.860788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.860818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.474 [2024-10-15 11:10:07.860870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.860887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.474 [2024-10-15 11:10:07.860945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.860960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.474 [2024-10-15 11:10:07.861012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.861031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.474 #22 NEW cov: 12512 ft: 14902 corp: 10/368b lim: 120 exec/s: 0 rss: 74Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:10:27.474 [2024-10-15 11:10:07.920906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.920935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.474 [2024-10-15 11:10:07.920986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.921003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.474 [2024-10-15 11:10:07.921061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.921077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.474 [2024-10-15 11:10:07.921133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.921149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.474 #23 NEW cov: 12512 ft: 14934 corp: 11/477b lim: 120 exec/s: 0 rss: 74Mb L: 109/109 MS: 1 ShuffleBytes- 00:10:27.474 [2024-10-15 11:10:07.980629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.474 [2024-10-15 11:10:07.980657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.474 #24 NEW cov: 12512 ft: 14963 corp: 12/505b lim: 120 exec/s: 0 rss: 74Mb L: 28/109 MS: 1 ChangeBit- 00:10:27.475 [2024-10-15 11:10:08.020729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521534077173117 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.475 [2024-10-15 11:10:08.020757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.475 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:27.475 #25 NEW cov: 12535 ft: 15051 corp: 13/533b lim: 120 exec/s: 0 rss: 74Mb L: 28/109 MS: 1 ChangeBit- 00:10:27.475 [2024-10-15 11:10:08.080897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9011139904557383037 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.475 [2024-10-15 11:10:08.080925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.475 #26 NEW cov: 12535 ft: 15063 corp: 14/577b lim: 120 exec/s: 0 rss: 74Mb L: 44/109 MS: 1 ChangeBit- 00:10:27.733 [2024-10-15 11:10:08.121014] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1099511627534 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.733 [2024-10-15 11:10:08.121049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.733 #27 NEW cov: 12535 ft: 15159 corp: 15/611b lim: 120 exec/s: 27 rss: 75Mb L: 34/109 MS: 1 PersAutoDict- DE: "\016\000\000\000"- 00:10:27.733 [2024-10-15 11:10:08.181203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17654110535139818877 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.733 [2024-10-15 11:10:08.181230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.733 #28 NEW cov: 12535 ft: 15177 corp: 16/655b lim: 120 exec/s: 28 rss: 75Mb L: 44/109 MS: 1 CMP- DE: "\364\377\377\377"- 00:10:27.734 [2024-10-15 11:10:08.241331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.734 [2024-10-15 11:10:08.241358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.734 #39 NEW cov: 12535 ft: 15187 corp: 17/683b lim: 120 exec/s: 39 rss: 75Mb L: 28/109 MS: 1 ChangeBit- 00:10:27.734 [2024-10-15 11:10:08.281446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.734 [2024-10-15 11:10:08.281474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.734 #40 NEW cov: 12535 ft: 15209 corp: 18/711b lim: 120 exec/s: 40 rss: 75Mb L: 28/109 MS: 1 EraseBytes- 00:10:27.734 [2024-10-15 11:10:08.321588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1099511627534 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.734 [2024-10-15 11:10:08.321615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.993 #41 NEW cov: 12535 ft: 15222 corp: 19/745b lim: 120 exec/s: 41 rss: 75Mb L: 34/109 MS: 1 CMP- DE: "\002\000"- 00:10:27.993 [2024-10-15 11:10:08.382239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.382267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.993 [2024-10-15 11:10:08.382318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4051049681650202680 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.382335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.993 [2024-10-15 11:10:08.382390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.382407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:10:27.993 [2024-10-15 11:10:08.382461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4051049678932293688 len:14393 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.382476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.993 #42 NEW cov: 12535 ft: 15267 corp: 20/855b lim: 120 exec/s: 42 rss: 75Mb L: 110/110 MS: 1 InsertByte- 00:10:27.993 [2024-10-15 11:10:08.421996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9009589030521503101 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.422022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.993 [2024-10-15 11:10:08.422071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584125 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.422088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.993 #43 NEW cov: 12535 ft: 15620 corp: 21/911b lim: 120 exec/s: 43 rss: 75Mb L: 56/110 MS: 1 CopyPart- 00:10:27.993 [2024-10-15 11:10:08.462001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.462032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.993 #44 NEW cov: 12535 ft: 15635 corp: 22/939b lim: 120 exec/s: 44 rss: 75Mb L: 28/110 MS: 1 PersAutoDict- DE: "\364\377\377\377"- 00:10:27.993 [2024-10-15 11:10:08.502238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9009589030521503101 len:32012 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.502266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.993 [2024-10-15 11:10:08.502304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584125 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.502320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.993 #45 NEW cov: 12535 ft: 15659 corp: 23/995b lim: 120 exec/s: 45 rss: 75Mb L: 56/110 MS: 1 ChangeByte- 00:10:27.993 [2024-10-15 11:10:08.562238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.562267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.993 #46 NEW cov: 12535 ft: 15688 corp: 24/1024b lim: 120 exec/s: 46 rss: 75Mb L: 29/110 MS: 1 InsertByte- 00:10:27.993 [2024-10-15 11:10:08.622783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9011139904557383037 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:27.993 [2024-10-15 11:10:08.622812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.252 [2024-10-15 11:10:08.622858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584061 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.252 [2024-10-15 11:10:08.622875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.252 [2024-10-15 11:10:08.622935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9042521604759567741 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.252 [2024-10-15 11:10:08.622951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.252 #47 NEW cov: 12535 ft: 16015 corp: 25/1115b lim: 120 exec/s: 47 rss: 75Mb L: 91/110 MS: 1 CrossOver- 00:10:28.252 [2024-10-15 11:10:08.662878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9011139904557383037 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.252 [2024-10-15 11:10:08.662906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.252 [2024-10-15 11:10:08.662944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584061 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.252 [2024-10-15 11:10:08.662961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.252 [2024-10-15 11:10:08.663019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9042521604759567741 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.252 [2024-10-15 11:10:08.663041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.252 #48 NEW cov: 12535 ft: 16032 corp: 26/1206b lim: 120 exec/s: 48 rss: 75Mb L: 91/110 MS: 1 ChangeByte- 00:10:28.252 [2024-10-15 11:10:08.723191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9011139904557383037 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.723220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.253 [2024-10-15 11:10:08.723279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584061 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.723294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.253 [2024-10-15 11:10:08.723350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9042521604759567741 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.723365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.253 [2024-10-15 11:10:08.723423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:9042521604759584061 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.723438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.253 #49 NEW cov: 12535 ft: 16044 corp: 27/1319b lim: 120 exec/s: 49 rss: 75Mb L: 113/113 MS: 1 InsertRepeatedBytes- 00:10:28.253 [2024-10-15 11:10:08.762820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.762849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.253 #50 NEW cov: 12535 ft: 16082 corp: 28/1347b lim: 120 exec/s: 50 rss: 75Mb L: 28/113 MS: 1 ChangeBinInt- 00:10:28.253 [2024-10-15 11:10:08.823121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.823149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.253 [2024-10-15 11:10:08.823188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604760239485 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.823204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.253 #51 NEW cov: 12535 ft: 16095 corp: 29/1398b lim: 120 exec/s: 51 rss: 75Mb L: 51/113 MS: 1 CopyPart- 00:10:28.253 [2024-10-15 11:10:08.863275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521602796619901 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.863303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.253 [2024-10-15 11:10:08.863341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584125 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.253 [2024-10-15 11:10:08.863357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.512 #52 NEW cov: 12535 ft: 16116 corp: 30/1454b lim: 120 exec/s: 52 rss: 75Mb L: 56/113 MS: 1 CrossOver- 00:10:28.512 [2024-10-15 11:10:08.903413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9009589030521503101 len:32012 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.512 [2024-10-15 11:10:08.903440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.513 [2024-10-15 11:10:08.903478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584125 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:08.903498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.513 #53 NEW cov: 12535 ft: 16119 corp: 31/1510b lim: 120 exec/s: 53 rss: 75Mb L: 56/113 MS: 1 ShuffleBytes- 00:10:28.513 [2024-10-15 11:10:08.963406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9045336352563756413 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:08.963434] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.513 #54 NEW cov: 12535 ft: 16131 corp: 32/1539b lim: 120 exec/s: 54 rss: 75Mb L: 29/113 MS: 1 ChangeBinInt- 00:10:28.513 [2024-10-15 11:10:09.023549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521534077173117 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:09.023578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.513 #55 NEW cov: 12535 ft: 16152 corp: 33/1567b lim: 120 exec/s: 55 rss: 75Mb L: 28/113 MS: 1 ChangeBit- 00:10:28.513 [2024-10-15 11:10:09.083715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9042521942099066237 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:09.083743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.513 #56 NEW cov: 12535 ft: 16160 corp: 34/1596b lim: 120 exec/s: 56 rss: 75Mb L: 29/113 MS: 1 InsertByte- 00:10:28.513 [2024-10-15 11:10:09.124331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9011139904557383037 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:09.124359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.513 [2024-10-15 11:10:09.124416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9042521604759584061 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:09.124432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.513 [2024-10-15 11:10:09.124489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:9042521132313165181 len:3966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:09.124505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.513 [2024-10-15 11:10:09.124559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:9042521329881677181 len:32126 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:28.513 [2024-10-15 11:10:09.124574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.772 #57 NEW cov: 12535 ft: 16198 corp: 35/1713b lim: 120 exec/s: 28 rss: 75Mb L: 117/117 MS: 1 InsertRepeatedBytes- 00:10:28.772 #57 DONE cov: 12535 ft: 16198 corp: 35/1713b lim: 120 exec/s: 28 rss: 75Mb 00:10:28.772 ###### Recommended dictionary. ###### 00:10:28.772 "\016\000\000\000" # Uses: 2 00:10:28.772 "\364\377\377\377" # Uses: 1 00:10:28.772 "\002\000" # Uses: 0 00:10:28.772 ###### End of recommended dictionary. 
###### 00:10:28.772 Done 57 runs in 2 second(s) 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:28.772 11:10:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:10:28.772 [2024-10-15 11:10:09.313335] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:28.772 [2024-10-15 11:10:09.313408] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719988 ] 00:10:29.032 [2024-10-15 11:10:09.496012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.032 [2024-10-15 11:10:09.533869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.032 [2024-10-15 11:10:09.592742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.032 [2024-10-15 11:10:09.608891] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:10:29.032 INFO: Running with entropic power schedule (0xFF, 100). 00:10:29.032 INFO: Seed: 559635176 00:10:29.032 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:29.032 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:29.032 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:10:29.032 INFO: A corpus is not provided, starting from an empty corpus 00:10:29.032 #2 INITED exec/s: 0 rss: 66Mb 00:10:29.032 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:29.032 This may also happen if the target rejected all inputs we tried so far 00:10:29.032 [2024-10-15 11:10:09.654218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.032 [2024-10-15 11:10:09.654248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.550 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:10:29.550 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:29.550 #11 NEW cov: 12247 ft: 12248 corp: 2/32b lim: 100 exec/s: 0 rss: 73Mb L: 31/31 MS: 4 CopyPart-ChangeBit-CopyPart-InsertRepeatedBytes- 00:10:29.550 [2024-10-15 11:10:09.995040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.550 [2024-10-15 11:10:09.995076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.550 #12 NEW cov: 12364 ft: 12943 corp: 3/63b lim: 100 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 ChangeBinInt- 00:10:29.550 [2024-10-15 11:10:10.065201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.550 [2024-10-15 11:10:10.065239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.550 #13 NEW cov: 12370 ft: 13238 corp: 4/94b lim: 100 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 CopyPart- 00:10:29.550 [2024-10-15 11:10:10.105277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.550 [2024-10-15 11:10:10.105312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.550 #14 NEW cov: 12455 ft: 13492 corp: 5/118b lim: 100 exec/s: 0 rss: 74Mb L: 24/31 MS: 1 CrossOver- 00:10:29.550 [2024-10-15 
11:10:10.145351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.550 [2024-10-15 11:10:10.145379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.809 #15 NEW cov: 12455 ft: 13646 corp: 6/142b lim: 100 exec/s: 0 rss: 74Mb L: 24/31 MS: 1 ChangeBit- 00:10:29.809 [2024-10-15 11:10:10.205491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.809 [2024-10-15 11:10:10.205518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.809 #16 NEW cov: 12455 ft: 13684 corp: 7/166b lim: 100 exec/s: 0 rss: 74Mb L: 24/31 MS: 1 ChangeBinInt- 00:10:29.809 [2024-10-15 11:10:10.245928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.809 [2024-10-15 11:10:10.245954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.809 [2024-10-15 11:10:10.246023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:29.809 [2024-10-15 11:10:10.246042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:29.809 [2024-10-15 11:10:10.246092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:29.809 [2024-10-15 11:10:10.246107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:29.809 [2024-10-15 11:10:10.246159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:29.809 [2024-10-15 11:10:10.246172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:29.809 #17 NEW cov: 12455 ft: 14105 corp: 8/246b lim: 100 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:10:29.809 [2024-10-15 11:10:10.285729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.809 [2024-10-15 11:10:10.285753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.809 #18 NEW cov: 12455 ft: 14126 corp: 9/266b lim: 100 exec/s: 0 rss: 74Mb L: 20/80 MS: 1 EraseBytes- 00:10:29.809 [2024-10-15 11:10:10.325844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.809 [2024-10-15 11:10:10.325871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.809 #19 NEW cov: 12455 ft: 14215 corp: 10/291b lim: 100 exec/s: 0 rss: 74Mb L: 25/80 MS: 1 InsertByte- 00:10:29.809 [2024-10-15 11:10:10.366001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.809 [2024-10-15 11:10:10.366033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.809 #20 NEW cov: 12455 ft: 14252 corp: 11/311b lim: 100 exec/s: 0 rss: 74Mb L: 20/80 MS: 1 ChangeBinInt- 00:10:29.809 [2024-10-15 11:10:10.426145] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:29.809 [2024-10-15 11:10:10.426171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.068 #21 NEW cov: 12455 ft: 14310 corp: 12/331b lim: 100 exec/s: 0 rss: 74Mb L: 20/80 MS: 1 ChangeByte- 00:10:30.068 [2024-10-15 11:10:10.466259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.068 [2024-10-15 11:10:10.466285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.068 #22 NEW cov: 12455 ft: 14362 corp: 13/355b lim: 100 exec/s: 0 rss: 74Mb L: 24/80 MS: 1 ChangeBit- 00:10:30.068 [2024-10-15 11:10:10.526772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.068 [2024-10-15 11:10:10.526798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.526845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.068 [2024-10-15 11:10:10.526858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.526911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.068 [2024-10-15 11:10:10.526925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.526979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:30.068 [2024-10-15 11:10:10.526993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.068 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:30.068 #23 NEW cov: 12478 ft: 14413 corp: 14/435b lim: 100 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 ChangeBit- 00:10:30.068 [2024-10-15 11:10:10.586924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.068 [2024-10-15 11:10:10.586951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.586998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.068 [2024-10-15 11:10:10.587012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.587063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.068 [2024-10-15 11:10:10.587078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.587131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:30.068 [2024-10-15 11:10:10.587145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
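For reading the fuzzer progress lines above and below: in libFuzzer output, "#N NEW" means execution N produced an input with new coverage that was added to the corpus; "cov" counts covered control-flow points (from the 385236 inline 8-bit counters loaded at startup), "ft" counts features, "corp" is corpus entries/total bytes, "lim" is the current input-length cap, "L: a/b" is the new input's length over the corpus maximum, and "MS: k m1-m2-..." lists the k mutations that produced it (CopyPart, ChangeBit, InsertRepeatedBytes, and so on). A small stand-alone helper for pulling out the coverage trajectory; it is not part of the test suite and assumes the console log has been saved one entry per line under the hypothetical name nvmf_18.log:

  grep -oE '#[0-9]+ NEW cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' nvmf_18.log |
    awk '{printf "%-6s cov=%-6s ft=%-6s corp=%s\n", $1, $4, $6, $8}'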
00:10:30.068 #24 NEW cov: 12478 ft: 14428 corp: 15/515b lim: 100 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 ShuffleBytes- 00:10:30.068 [2024-10-15 11:10:10.627033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.068 [2024-10-15 11:10:10.627059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.627111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.068 [2024-10-15 11:10:10.627124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.627174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.068 [2024-10-15 11:10:10.627192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.068 [2024-10-15 11:10:10.627242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:30.068 [2024-10-15 11:10:10.627256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.068 #25 NEW cov: 12478 ft: 14438 corp: 16/596b lim: 100 exec/s: 25 rss: 74Mb L: 81/81 MS: 1 InsertByte- 00:10:30.068 [2024-10-15 11:10:10.686836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.068 [2024-10-15 11:10:10.686863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.327 #26 NEW cov: 12478 ft: 14552 corp: 17/616b lim: 100 exec/s: 26 rss: 74Mb L: 20/81 MS: 1 ShuffleBytes- 00:10:30.327 [2024-10-15 11:10:10.747043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.327 [2024-10-15 11:10:10.747069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.327 #27 NEW cov: 12478 ft: 14600 corp: 18/641b lim: 100 exec/s: 27 rss: 74Mb L: 25/81 MS: 1 InsertByte- 00:10:30.327 [2024-10-15 11:10:10.807527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.327 [2024-10-15 11:10:10.807553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.327 [2024-10-15 11:10:10.807600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.327 [2024-10-15 11:10:10.807615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.327 [2024-10-15 11:10:10.807665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.327 [2024-10-15 11:10:10.807679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.327 [2024-10-15 11:10:10.807728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:30.327 [2024-10-15 11:10:10.807743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.327 #28 NEW cov: 12478 ft: 14634 corp: 19/721b lim: 100 exec/s: 28 rss: 75Mb L: 80/81 MS: 1 ChangeBit- 00:10:30.327 [2024-10-15 11:10:10.867345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.327 [2024-10-15 11:10:10.867371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.327 #34 NEW cov: 12478 ft: 14720 corp: 20/753b lim: 100 exec/s: 34 rss: 75Mb L: 32/81 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\177"- 00:10:30.327 [2024-10-15 11:10:10.907512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.327 [2024-10-15 11:10:10.907539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.327 #35 NEW cov: 12478 ft: 14731 corp: 21/773b lim: 100 exec/s: 35 rss: 75Mb L: 20/81 MS: 1 ChangeBinInt- 00:10:30.327 [2024-10-15 11:10:10.947801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.327 [2024-10-15 11:10:10.947827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.327 [2024-10-15 11:10:10.947862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.327 [2024-10-15 11:10:10.947877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.327 [2024-10-15 11:10:10.947928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.327 [2024-10-15 11:10:10.947946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.586 #36 NEW cov: 12478 ft: 15079 corp: 22/850b lim: 100 exec/s: 36 rss: 75Mb L: 77/81 MS: 1 InsertRepeatedBytes- 00:10:30.586 [2024-10-15 11:10:11.007777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.586 [2024-10-15 11:10:11.007802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.586 #37 NEW cov: 12478 ft: 15124 corp: 23/874b lim: 100 exec/s: 37 rss: 75Mb L: 24/81 MS: 1 ChangeBit- 00:10:30.586 [2024-10-15 11:10:11.067921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.586 [2024-10-15 11:10:11.067947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.586 #38 NEW cov: 12478 ft: 15157 corp: 24/905b lim: 100 exec/s: 38 rss: 75Mb L: 31/81 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\177"- 00:10:30.586 [2024-10-15 11:10:11.128287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.586 [2024-10-15 11:10:11.128313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.586 [2024-10-15 11:10:11.128357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.586 [2024-10-15 11:10:11.128372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.586 [2024-10-15 11:10:11.128423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.586 [2024-10-15 11:10:11.128436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.586 #39 NEW cov: 12478 ft: 15173 corp: 25/981b lim: 100 exec/s: 39 rss: 75Mb L: 76/81 MS: 1 InsertRepeatedBytes- 00:10:30.586 [2024-10-15 11:10:11.168162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.586 [2024-10-15 11:10:11.168189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.586 #40 NEW cov: 12478 ft: 15257 corp: 26/1006b lim: 100 exec/s: 40 rss: 75Mb L: 25/81 MS: 1 ChangeByte- 00:10:30.843 [2024-10-15 11:10:11.228330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.843 [2024-10-15 11:10:11.228355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.843 #41 NEW cov: 12478 ft: 15271 corp: 27/1027b lim: 100 exec/s: 41 rss: 75Mb L: 21/81 MS: 1 InsertRepeatedBytes- 00:10:30.843 [2024-10-15 11:10:11.268793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.843 [2024-10-15 11:10:11.268819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.268888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.843 [2024-10-15 11:10:11.268903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.268953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.843 [2024-10-15 11:10:11.268969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.269021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:30.843 [2024-10-15 11:10:11.269042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.843 #42 NEW cov: 12478 ft: 15300 corp: 28/1107b lim: 100 exec/s: 42 rss: 75Mb L: 80/81 MS: 1 ChangeBinInt- 00:10:30.843 [2024-10-15 11:10:11.308565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.843 [2024-10-15 11:10:11.308590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.843 #43 NEW cov: 12478 ft: 15320 corp: 29/1127b lim: 100 exec/s: 43 rss: 75Mb L: 20/81 MS: 1 ChangeByte- 00:10:30.843 [2024-10-15 11:10:11.368808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.843 [2024-10-15 11:10:11.368833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.368889] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.843 [2024-10-15 11:10:11.368905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.843 #48 NEW cov: 12478 ft: 15578 corp: 30/1167b lim: 100 exec/s: 48 rss: 75Mb L: 40/81 MS: 5 CrossOver-InsertByte-PersAutoDict-PersAutoDict-CrossOver- DE: "\377\377\377\377\377\377\377\177"-"\377\377\377\377\377\377\377\177"- 00:10:30.843 [2024-10-15 11:10:11.409181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.843 [2024-10-15 11:10:11.409205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.409258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:30.843 [2024-10-15 11:10:11.409270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.409322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:30.843 [2024-10-15 11:10:11.409335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.843 [2024-10-15 11:10:11.409389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:30.843 [2024-10-15 11:10:11.409403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.843 #49 NEW cov: 12478 ft: 15589 corp: 31/1266b lim: 100 exec/s: 49 rss: 75Mb L: 99/99 MS: 1 CopyPart- 00:10:30.843 [2024-10-15 11:10:11.448967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:30.843 [2024-10-15 11:10:11.448993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.102 #50 NEW cov: 12478 ft: 15592 corp: 32/1297b lim: 100 exec/s: 50 rss: 75Mb L: 31/99 MS: 1 ChangeByte- 00:10:31.102 [2024-10-15 11:10:11.509492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:31.102 [2024-10-15 11:10:11.509517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.102 [2024-10-15 11:10:11.509567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:31.102 [2024-10-15 11:10:11.509582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.102 [2024-10-15 11:10:11.509633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:31.102 [2024-10-15 11:10:11.509647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.102 [2024-10-15 11:10:11.509699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:31.102 [2024-10-15 11:10:11.509713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
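The "CMP-" and "PersAutoDict- DE: ..." mutations above come from libFuzzer's auto-dictionary: operands captured from comparison instrumentation are replayed as dictionary entries, and the ones that kept producing new coverage are printed in the "Recommended dictionary" block at the end of the run, visible just below. A sketch for harvesting that block for later runs; the file names are made up, and whether this SPDK wrapper forwards libFuzzer's -dict= flag is an assumption, not something this log shows:

  # Collect the quoted entries between the dictionary markers into a dict file.
  sed -n '/Recommended dictionary/,/End of recommended dictionary/p' nvmf_18.log |
    grep -o '"[^"]*"' > nvmf_18.dict
  # Later run (assumed pass-through): llvm_nvme_fuzz ... -dict=nvmf_18.dict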
00:10:31.102 #51 NEW cov: 12478 ft: 15597 corp: 33/1382b lim: 100 exec/s: 51 rss: 75Mb L: 85/99 MS: 1 InsertRepeatedBytes- 00:10:31.102 [2024-10-15 11:10:11.549238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:31.102 [2024-10-15 11:10:11.549263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.102 #52 NEW cov: 12478 ft: 15603 corp: 34/1414b lim: 100 exec/s: 52 rss: 75Mb L: 32/99 MS: 1 InsertRepeatedBytes- 00:10:31.102 [2024-10-15 11:10:11.589361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:31.102 [2024-10-15 11:10:11.589387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.102 #53 NEW cov: 12478 ft: 15613 corp: 35/1445b lim: 100 exec/s: 53 rss: 75Mb L: 31/99 MS: 1 ChangeByte- 00:10:31.102 [2024-10-15 11:10:11.649761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:31.102 [2024-10-15 11:10:11.649786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.102 [2024-10-15 11:10:11.649821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:31.102 [2024-10-15 11:10:11.649836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.102 [2024-10-15 11:10:11.649903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:31.102 [2024-10-15 11:10:11.649918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.102 #54 NEW cov: 12478 ft: 15635 corp: 36/1523b lim: 100 exec/s: 27 rss: 75Mb L: 78/99 MS: 1 InsertRepeatedBytes- 00:10:31.102 #54 DONE cov: 12478 ft: 15635 corp: 36/1523b lim: 100 exec/s: 27 rss: 75Mb 00:10:31.102 ###### Recommended dictionary. ###### 00:10:31.102 "\377\377\377\377\377\377\377\177" # Uses: 3 00:10:31.102 ###### End of recommended dictionary. 
######
00:10:31.102 Done 54 runs in 2 second(s)
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:10:31.361 11:10:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
00:10:31.361 [2024-10-15 11:10:11.840235] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization...
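Each fuzzer type goes through the same prologue; the ../common.sh@72-73 lines in the trace above belong to the driver loop that walks the nvmf fuzzer types, giving each a short timed run on core 0x1. A sketch of its shape; only the loop conditions and the start_llvm_fuzz arguments (type, seconds, core mask) appear in the log, so the bounds and the stub are assumptions:

  fuzz_num=25   # assumed; the real count is defined elsewhere in common.sh
  start_llvm_fuzz() { echo "would run type $1 for $2 s on mask $3"; }  # stand-in for nvmf/run.sh
  for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1
  done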
00:10:31.361 [2024-10-15 11:10:11.840325] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720341 ] 00:10:31.620 [2024-10-15 11:10:12.021054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.620 [2024-10-15 11:10:12.059802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.620 [2024-10-15 11:10:12.118655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.620 [2024-10-15 11:10:12.134805] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:10:31.620 INFO: Running with entropic power schedule (0xFF, 100). 00:10:31.620 INFO: Seed: 3086638651 00:10:31.620 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:31.620 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:31.620 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:10:31.620 INFO: A corpus is not provided, starting from an empty corpus 00:10:31.620 #2 INITED exec/s: 0 rss: 66Mb 00:10:31.620 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:31.620 This may also happen if the target rejected all inputs we tried so far 00:10:31.620 [2024-10-15 11:10:12.190558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:31.620 [2024-10-15 11:10:12.190590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.620 [2024-10-15 11:10:12.190631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:31.620 [2024-10-15 11:10:12.190647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.620 [2024-10-15 11:10:12.190694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:31.620 [2024-10-15 11:10:12.190709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.620 [2024-10-15 11:10:12.190757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:31.620 [2024-10-15 11:10:12.190772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.620 [2024-10-15 11:10:12.190822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:31.620 [2024-10-15 11:10:12.190838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.189 NEW_FUNC[1/713]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:10:32.189 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 
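The NEW_FUNC lines just above tie fuzzer type 19 to fuzz_nvm_write_uncorrectable_command, reached from the TestOneInput dispatcher at llvm_nvme_fuzz.c:780, the same way type 18 resolved to fuzz_nvm_write_zeroes_command earlier. Since the binary clearly carries symbols, the full set of per-type entry points can be listed with plain binutils; a generic sketch, not an SPDK-provided command:

  nm --defined-only \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz |
    grep 'fuzz_nvm_' | sort -k3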
00:10:32.189 #8 NEW cov: 12222 ft: 12220 corp: 2/51b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:10:32.189 [2024-10-15 11:10:12.531557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.531596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.531654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.531671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.531723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.531738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.531792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:32.189 [2024-10-15 11:10:12.531807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.531862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:32.189 [2024-10-15 11:10:12.531877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.189 NEW_FUNC[1/1]: 0x1019f58 in posix_sock_recv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1628 00:10:32.189 #9 NEW cov: 12342 ft: 12760 corp: 3/101b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeBit- 00:10:32.189 [2024-10-15 11:10:12.591621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.591651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.591702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.591719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.591772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.591788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.591841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18764 00:10:32.189 [2024-10-15 11:10:12.591856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 
11:10:12.591911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.591926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.189 #10 NEW cov: 12348 ft: 12970 corp: 4/151b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:10:32.189 [2024-10-15 11:10:12.651556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.651586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.651618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.651633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.651685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.651701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.189 #11 NEW cov: 12433 ft: 13635 corp: 5/188b lim: 50 exec/s: 0 rss: 73Mb L: 37/50 MS: 1 EraseBytes- 00:10:32.189 [2024-10-15 11:10:12.691394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15046755950319947984 len:53457 00:10:32.189 [2024-10-15 11:10:12.691421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.189 #12 NEW cov: 12433 ft: 14076 corp: 6/205b lim: 50 exec/s: 0 rss: 73Mb L: 17/50 MS: 1 InsertRepeatedBytes- 00:10:32.189 [2024-10-15 11:10:12.731981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.732010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.732072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.732089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.732141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.732156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.732209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18764 00:10:32.189 [2024-10-15 11:10:12.732224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.732277] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617178089801 len:18762 00:10:32.189 [2024-10-15 11:10:12.732292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.189 #13 NEW cov: 12433 ft: 14265 corp: 7/255b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:10:32.189 [2024-10-15 11:10:12.792235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.792264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.792315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.792330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.792400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.792417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.792470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:32.189 [2024-10-15 11:10:12.792485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.189 [2024-10-15 11:10:12.792538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:32.189 [2024-10-15 11:10:12.792554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.189 #14 NEW cov: 12433 ft: 14354 corp: 8/305b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 CrossOver- 00:10:32.449 [2024-10-15 11:10:12.832300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.832329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.832381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.832397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.832453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.832469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.832522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:32.449 [2024-10-15 11:10:12.832537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.832592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:32.449 [2024-10-15 11:10:12.832608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.449 #15 NEW cov: 12433 ft: 14411 corp: 9/355b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes- 00:10:32.449 [2024-10-15 11:10:12.872269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.872297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.872346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.872363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.872416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.872431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.872485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597131 len:18762 00:10:32.449 [2024-10-15 11:10:12.872500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.449 #16 NEW cov: 12433 ft: 14446 corp: 10/403b lim: 50 exec/s: 0 rss: 74Mb L: 48/50 MS: 1 EraseBytes- 00:10:32.449 [2024-10-15 11:10:12.912527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.912555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.912610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.912624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.912677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.912691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.912747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:32.449 [2024-10-15 11:10:12.912762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.912815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.912832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.449 #17 NEW cov: 12433 ft: 14475 corp: 11/453b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:10:32.449 [2024-10-15 11:10:12.952499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.952527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.952582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.952598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.952648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832625769531721 len:18762 00:10:32.449 [2024-10-15 11:10:12.952664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:12.952717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:12.952733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.449 #18 NEW cov: 12433 ft: 14486 corp: 12/495b lim: 50 exec/s: 0 rss: 74Mb L: 42/50 MS: 1 EraseBytes- 00:10:32.449 [2024-10-15 11:10:13.012678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:13.012706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.012754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:13.012771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.012824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:13.012839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.012894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597131 len:18762 00:10:32.449 [2024-10-15 11:10:13.012909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.449 #19 NEW cov: 12433 ft: 14536 corp: 13/543b lim: 50 exec/s: 0 rss: 74Mb L: 48/50 MS: 1 ChangeByte- 00:10:32.449 [2024-10-15 11:10:13.072984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 
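The lba and len values that recur in every WRITE UNCORRECTABLE print here are the fuzzed input showing through rather than meaningful addresses: 5280832617179597129 is 0x4949494949494949 (eight repeated 0x49 bytes) and 18762 is 0x494a, which lines up with the InsertRepeatedBytes and CopyPart mutations that built these inputs. A one-line check:

  printf 'lba 0x%x len 0x%x\n' 5280832617179597129 18762
  # -> lba 0x4949494949494949 len 0x494a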
00:10:32.449 [2024-10-15 11:10:13.073011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.073073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:13.073088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.073164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:13.073180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.073235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:32.449 [2024-10-15 11:10:13.073252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.449 [2024-10-15 11:10:13.073307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5281395567133018441 len:18699 00:10:32.449 [2024-10-15 11:10:13.073322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.709 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:32.709 #20 NEW cov: 12456 ft: 14605 corp: 14/593b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 ChangeBit- 00:10:32.709 [2024-10-15 11:10:13.112828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.112855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.112892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.112907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.112962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.112977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.709 #21 NEW cov: 12456 ft: 14632 corp: 15/630b lim: 50 exec/s: 0 rss: 74Mb L: 37/50 MS: 1 CrossOver- 00:10:32.709 [2024-10-15 11:10:13.173162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.173190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.173238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.173254] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.173306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.173321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.173376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597131 len:18699 00:10:32.709 [2024-10-15 11:10:13.173392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.709 #22 NEW cov: 12456 ft: 14658 corp: 16/670b lim: 50 exec/s: 22 rss: 74Mb L: 40/50 MS: 1 EraseBytes- 00:10:32.709 [2024-10-15 11:10:13.213357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.213384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.213434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3147558215643384064 len:40671 00:10:32.709 [2024-10-15 11:10:13.213450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.213504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832616558840137 len:18762 00:10:32.709 [2024-10-15 11:10:13.213520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.213573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:32.709 [2024-10-15 11:10:13.213587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.213641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.213657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.709 #23 NEW cov: 12456 ft: 14686 corp: 17/720b lim: 50 exec/s: 23 rss: 74Mb L: 50/50 MS: 1 CMP- DE: "\000+\256_\205\236\336$"- 00:10:32.709 [2024-10-15 11:10:13.273393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.273420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.273473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.273488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 
11:10:13.273539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.709 [2024-10-15 11:10:13.273554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.273607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597131 len:18762 00:10:32.709 [2024-10-15 11:10:13.273623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.709 #24 NEW cov: 12456 ft: 14706 corp: 18/762b lim: 50 exec/s: 24 rss: 74Mb L: 42/50 MS: 1 CrossOver- 00:10:32.709 [2024-10-15 11:10:13.333451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9187201948472803199 len:32640 00:10:32.709 [2024-10-15 11:10:13.333479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.709 [2024-10-15 11:10:13.333546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9187201950435737471 len:32640 00:10:32.709 [2024-10-15 11:10:13.333562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.710 [2024-10-15 11:10:13.333619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9187201950435737471 len:32640 00:10:32.710 [2024-10-15 11:10:13.333635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.969 #26 NEW cov: 12456 ft: 14753 corp: 19/792b lim: 50 exec/s: 26 rss: 74Mb L: 30/50 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:32.969 [2024-10-15 11:10:13.373308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.373338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.969 #27 NEW cov: 12456 ft: 14805 corp: 20/806b lim: 50 exec/s: 27 rss: 74Mb L: 14/50 MS: 1 CrossOver- 00:10:32.969 [2024-10-15 11:10:13.413766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832616122632521 len:18762 00:10:32.969 [2024-10-15 11:10:13.413794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.413848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.413864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.413916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:19274 00:10:32.969 [2024-10-15 11:10:13.413931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.413986] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.414002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.969 #28 NEW cov: 12456 ft: 14864 corp: 21/849b lim: 50 exec/s: 28 rss: 74Mb L: 43/50 MS: 1 CrossOver- 00:10:32.969 [2024-10-15 11:10:13.453880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15585068564603291977 len:18762 00:10:32.969 [2024-10-15 11:10:13.453907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.453956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.453971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.454032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.454048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.454103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597131 len:18762 00:10:32.969 [2024-10-15 11:10:13.454118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.969 #29 NEW cov: 12456 ft: 14905 corp: 22/897b lim: 50 exec/s: 29 rss: 74Mb L: 48/50 MS: 1 ChangeByte- 00:10:32.969 [2024-10-15 11:10:13.494156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.494183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.494244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.494259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.494312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.494329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.494380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18764 00:10:32.969 [2024-10-15 11:10:13.494399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.494455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617178089801 len:18762 00:10:32.969 [2024-10-15 11:10:13.494470] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.969 #30 NEW cov: 12456 ft: 14939 corp: 23/947b lim: 50 exec/s: 30 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes- 00:10:32.969 [2024-10-15 11:10:13.554060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597274 len:18762 00:10:32.969 [2024-10-15 11:10:13.554088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.969 [2024-10-15 11:10:13.554135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.969 [2024-10-15 11:10:13.554151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.970 [2024-10-15 11:10:13.554204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.970 [2024-10-15 11:10:13.554220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.970 #31 NEW cov: 12456 ft: 14953 corp: 24/985b lim: 50 exec/s: 31 rss: 74Mb L: 38/50 MS: 1 InsertByte- 00:10:32.970 [2024-10-15 11:10:13.594180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597099 len:18762 00:10:32.970 [2024-10-15 11:10:13.594209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.970 [2024-10-15 11:10:13.594253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:32.970 [2024-10-15 11:10:13.594269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.970 [2024-10-15 11:10:13.594324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:32.970 [2024-10-15 11:10:13.594340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.229 #32 NEW cov: 12456 ft: 14992 corp: 25/1023b lim: 50 exec/s: 32 rss: 74Mb L: 38/50 MS: 1 InsertByte- 00:10:33.229 [2024-10-15 11:10:13.634507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:9628377299567750751 len:18762 00:10:33.229 [2024-10-15 11:10:13.634535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.229 [2024-10-15 11:10:13.634591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.229 [2024-10-15 11:10:13.634607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.229 [2024-10-15 11:10:13.634660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.229 [2024-10-15 11:10:13.634676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.229 [2024-10-15 11:10:13.634729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:33.229 [2024-10-15 11:10:13.634745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.229 [2024-10-15 11:10:13.634804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5281395567133018441 len:18699 00:10:33.229 [2024-10-15 11:10:13.634819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.230 #33 NEW cov: 12456 ft: 15025 corp: 26/1073b lim: 50 exec/s: 33 rss: 74Mb L: 50/50 MS: 1 PersAutoDict- DE: "\000+\256_\205\236\336$"- 00:10:33.230 [2024-10-15 11:10:13.694716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18690 00:10:33.230 [2024-10-15 11:10:13.694744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.694816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832616474954057 len:18762 00:10:33.230 [2024-10-15 11:10:13.694832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.694888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.694904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.694958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:33.230 [2024-10-15 11:10:13.694974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.695034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:33.230 [2024-10-15 11:10:13.695050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.230 #34 NEW cov: 12456 ft: 15119 corp: 27/1123b lim: 50 exec/s: 34 rss: 74Mb L: 50/50 MS: 1 CMP- DE: "\001\037"- 00:10:33.230 [2024-10-15 11:10:13.734816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.734844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.734902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.734916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.734970] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.734985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.735043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133149513 len:18762 00:10:33.230 [2024-10-15 11:10:13.735059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.735112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.735127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.230 #35 NEW cov: 12456 ft: 15136 corp: 28/1173b lim: 50 exec/s: 35 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:10:33.230 [2024-10-15 11:10:13.774928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.774959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.775025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.775047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.775103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617162819913 len:18762 00:10:33.230 [2024-10-15 11:10:13.775119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.775174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18764 00:10:33.230 [2024-10-15 11:10:13.775190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.775246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.775262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.230 #36 NEW cov: 12456 ft: 15186 corp: 29/1223b lim: 50 exec/s: 36 rss: 74Mb L: 50/50 MS: 1 ChangeBit- 00:10:33.230 [2024-10-15 11:10:13.814999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.815032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.815089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.815103] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.815156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.815172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.815222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18764 00:10:33.230 [2024-10-15 11:10:13.815237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.815292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280757850388908361 len:18762 00:10:33.230 [2024-10-15 11:10:13.815308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.230 #37 NEW cov: 12456 ft: 15320 corp: 30/1273b lim: 50 exec/s: 37 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:10:33.230 [2024-10-15 11:10:13.855207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.855236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.855295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.230 [2024-10-15 11:10:13.855310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.855366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179609673 len:18762 00:10:33.230 [2024-10-15 11:10:13.855385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.855440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:33.230 [2024-10-15 11:10:13.855456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.230 [2024-10-15 11:10:13.855511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:33.230 [2024-10-15 11:10:13.855526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.490 #38 NEW cov: 12456 ft: 15339 corp: 31/1323b lim: 50 exec/s: 38 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:10:33.490 [2024-10-15 11:10:13.895196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.895225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.895274] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.895290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.895344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.895360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.895417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.895433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.490 #39 NEW cov: 12456 ft: 15348 corp: 32/1366b lim: 50 exec/s: 39 rss: 74Mb L: 43/50 MS: 1 CrossOver- 00:10:33.490 [2024-10-15 11:10:13.935126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.935154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.935203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.935219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.935273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18699 00:10:33.490 [2024-10-15 11:10:13.935288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.490 #40 NEW cov: 12456 ft: 15360 corp: 33/1396b lim: 50 exec/s: 40 rss: 75Mb L: 30/50 MS: 1 EraseBytes- 00:10:33.490 [2024-10-15 11:10:13.975354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.975382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.975445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.975461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.975516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:13.975532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:13.975586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3623507954307254603 len:18762 00:10:33.490 [2024-10-15 11:10:13.975603] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.490 #41 NEW cov: 12456 ft: 15362 corp: 34/1445b lim: 50 exec/s: 41 rss: 75Mb L: 49/50 MS: 1 InsertByte- 00:10:33.490 [2024-10-15 11:10:14.015447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15585068564603291977 len:18762 00:10:33.490 [2024-10-15 11:10:14.015475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.015527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:14.015543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.015597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:14.015613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.015667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597131 len:18733 00:10:33.490 [2024-10-15 11:10:14.015682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.490 #42 NEW cov: 12456 ft: 15399 corp: 35/1494b lim: 50 exec/s: 42 rss: 75Mb L: 49/50 MS: 1 InsertByte- 00:10:33.490 [2024-10-15 11:10:14.075731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:14.075759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.075829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3147558215643384064 len:40671 00:10:33.490 [2024-10-15 11:10:14.075844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.075897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280834815582095689 len:18762 00:10:33.490 [2024-10-15 11:10:14.075912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.075965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133018441 len:18762 00:10:33.490 [2024-10-15 11:10:14.075980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.490 [2024-10-15 11:10:14.076041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:33.490 [2024-10-15 11:10:14.076057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.490 #43 NEW cov: 12456 ft: 15425 corp: 36/1544b 
lim: 50 exec/s: 43 rss: 75Mb L: 50/50 MS: 1 ChangeBit- 00:10:33.749 [2024-10-15 11:10:14.135927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.749 [2024-10-15 11:10:14.135955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.749 [2024-10-15 11:10:14.136019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.749 [2024-10-15 11:10:14.136041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.749 [2024-10-15 11:10:14.136097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832625769531721 len:18762 00:10:33.749 [2024-10-15 11:10:14.136113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.749 [2024-10-15 11:10:14.136167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5281395567133149513 len:18764 00:10:33.749 [2024-10-15 11:10:14.136183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.749 [2024-10-15 11:10:14.136237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18762 00:10:33.750 [2024-10-15 11:10:14.136255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.750 #44 NEW cov: 12456 ft: 15444 corp: 37/1594b lim: 50 exec/s: 44 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:10:33.750 [2024-10-15 11:10:14.176011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5280832617179597129 len:18762 00:10:33.750 [2024-10-15 11:10:14.176044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.750 [2024-10-15 11:10:14.176117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5280832617179597129 len:18762 00:10:33.750 [2024-10-15 11:10:14.176133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.750 [2024-10-15 11:10:14.176187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5280832617179597129 len:18762 00:10:33.750 [2024-10-15 11:10:14.176204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.750 [2024-10-15 11:10:14.176255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5418473880791107913 len:18762 00:10:33.750 [2024-10-15 11:10:14.176281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.750 [2024-10-15 11:10:14.176333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:5280832617179597129 len:18699 00:10:33.750 [2024-10-15 11:10:14.176347] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:33.750 #45 NEW cov: 12456 ft: 15458 corp: 38/1644b lim: 50 exec/s: 22 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:10:33.750 #45 DONE cov: 12456 ft: 15458 corp: 38/1644b lim: 50 exec/s: 22 rss: 75Mb 00:10:33.750 ###### Recommended dictionary. ###### 00:10:33.750 "\000+\256_\205\236\336$" # Uses: 1 00:10:33.750 "\001\037" # Uses: 0 00:10:33.750 ###### End of recommended dictionary. ###### 00:10:33.750 Done 45 runs in 2 second(s) 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:33.750 11:10:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:10:33.750 [2024-10-15 11:10:14.365257] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:33.750 [2024-10-15 11:10:14.365329] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720700 ] 00:10:34.009 [2024-10-15 11:10:14.546306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.009 [2024-10-15 11:10:14.585485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.269 [2024-10-15 11:10:14.644365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.269 [2024-10-15 11:10:14.660516] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:34.269 INFO: Running with entropic power schedule (0xFF, 100). 00:10:34.269 INFO: Seed: 1317679104 00:10:34.269 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:34.269 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:34.269 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:10:34.269 INFO: A corpus is not provided, starting from an empty corpus 00:10:34.269 #2 INITED exec/s: 0 rss: 66Mb 00:10:34.269 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:34.269 This may also happen if the target rejected all inputs we tried so far 00:10:34.269 [2024-10-15 11:10:14.705495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:34.269 [2024-10-15 11:10:14.705531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:34.269 [2024-10-15 11:10:14.705581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:34.269 [2024-10-15 11:10:14.705600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:34.269 [2024-10-15 11:10:14.705631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:34.269 [2024-10-15 11:10:14.705648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:34.528 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:10:34.528 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:34.528 #15 NEW cov: 12287 ft: 12280 corp: 2/72b lim: 90 exec/s: 0 rss: 73Mb L: 71/71 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:10:34.528 [2024-10-15 11:10:15.056470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:34.528 [2024-10-15 11:10:15.056512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:34.528 [2024-10-15 11:10:15.056562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:34.528 [2024-10-15 11:10:15.056581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:10:34.528 [2024-10-15 11:10:15.056612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:34.528 [2024-10-15 11:10:15.056629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:34.528 #16 NEW cov: 12400 ft: 12780 corp: 3/143b lim: 90 exec/s: 0 rss: 73Mb L: 71/71 MS: 1 ChangeByte- 00:10:34.528 [2024-10-15 11:10:15.146566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:34.528 [2024-10-15 11:10:15.146600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:34.528 [2024-10-15 11:10:15.146649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:34.528 [2024-10-15 11:10:15.146668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:34.528 [2024-10-15 11:10:15.146699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:34.528 [2024-10-15 11:10:15.146717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:34.788 #17 NEW cov: 12406 ft: 13128 corp: 4/214b lim: 90 exec/s: 0 rss: 73Mb L: 71/71 MS: 1 ChangeBit- 00:10:34.788 [2024-10-15 11:10:15.206752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:34.788 [2024-10-15 11:10:15.206783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:34.788 [2024-10-15 11:10:15.206830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:34.788 [2024-10-15 11:10:15.206849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:34.788 [2024-10-15 11:10:15.206879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:34.788 [2024-10-15 11:10:15.206896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:34.788 [2024-10-15 11:10:15.206925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:10:34.788 [2024-10-15 11:10:15.206942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:34.788 #18 NEW cov: 12491 ft: 13753 corp: 5/293b lim: 90 exec/s: 0 rss: 73Mb L: 79/79 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:10:34.788 [2024-10-15 11:10:15.296783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:34.788 [2024-10-15 11:10:15.296814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:34.788 #24 NEW cov: 12491 ft: 14675 corp: 6/328b lim: 90 exec/s: 0 rss: 73Mb L: 35/79 MS: 1 CrossOver- 00:10:34.788 [2024-10-15 11:10:15.387210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:34.788 [2024-10-15 11:10:15.387245] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:34.788 [2024-10-15 11:10:15.387295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:34.788 [2024-10-15 11:10:15.387313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:34.788 [2024-10-15 11:10:15.387343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:34.788 [2024-10-15 11:10:15.387360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:34.788 [2024-10-15 11:10:15.387389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:10:34.788 [2024-10-15 11:10:15.387405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:35.047 #25 NEW cov: 12491 ft: 14761 corp: 7/408b lim: 90 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 CopyPart- 00:10:35.047 [2024-10-15 11:10:15.447323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.047 [2024-10-15 11:10:15.447353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.047 [2024-10-15 11:10:15.447402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:35.047 [2024-10-15 11:10:15.447420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:35.047 [2024-10-15 11:10:15.447451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:35.047 [2024-10-15 11:10:15.447468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:35.047 #26 NEW cov: 12491 ft: 14805 corp: 8/475b lim: 90 exec/s: 0 rss: 74Mb L: 67/80 MS: 1 EraseBytes- 00:10:35.047 [2024-10-15 11:10:15.537419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.047 [2024-10-15 11:10:15.537450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.047 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:35.047 #27 NEW cov: 12508 ft: 14922 corp: 9/510b lim: 90 exec/s: 0 rss: 74Mb L: 35/80 MS: 1 ChangeBit- 00:10:35.047 [2024-10-15 11:10:15.627669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.047 [2024-10-15 11:10:15.627701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.047 #29 NEW cov: 12508 ft: 14951 corp: 10/538b lim: 90 exec/s: 0 rss: 74Mb L: 28/80 MS: 2 ChangeByte-CrossOver- 00:10:35.390 [2024-10-15 11:10:15.687974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.390 [2024-10-15 11:10:15.688004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.390 [2024-10-15 11:10:15.688057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:35.390 [2024-10-15 11:10:15.688076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:35.390 [2024-10-15 11:10:15.688107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:35.390 [2024-10-15 11:10:15.688123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:35.390 [2024-10-15 11:10:15.688152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:10:35.390 [2024-10-15 11:10:15.688173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:35.390 #30 NEW cov: 12508 ft: 14996 corp: 11/618b lim: 90 exec/s: 30 rss: 74Mb L: 80/80 MS: 1 ChangeBit- 00:10:35.390 [2024-10-15 11:10:15.748069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.390 [2024-10-15 11:10:15.748100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.390 [2024-10-15 11:10:15.748150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:35.390 [2024-10-15 11:10:15.748169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:35.390 #31 NEW cov: 12508 ft: 15296 corp: 12/661b lim: 90 exec/s: 31 rss: 74Mb L: 43/80 MS: 1 CMP- DE: "\301$\242\340`\256+\000"- 00:10:35.390 [2024-10-15 11:10:15.808131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.390 [2024-10-15 11:10:15.808160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.390 #32 NEW cov: 12508 ft: 15370 corp: 13/696b lim: 90 exec/s: 32 rss: 74Mb L: 35/80 MS: 1 ChangeBit- 00:10:35.390 [2024-10-15 11:10:15.868321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.390 [2024-10-15 11:10:15.868350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.390 #33 NEW cov: 12508 ft: 15426 corp: 14/731b lim: 90 exec/s: 33 rss: 74Mb L: 35/80 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\015"- 00:10:35.391 [2024-10-15 11:10:15.928480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.391 [2024-10-15 11:10:15.928511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.756 #34 NEW cov: 12508 ft: 15500 corp: 15/759b lim: 90 exec/s: 34 rss: 74Mb L: 28/80 MS: 1 ChangeByte- 00:10:35.756 [2024-10-15 11:10:16.018724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.756 [2024-10-15 11:10:16.018754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:10:35.756 #35 NEW cov: 12508 ft: 15522 corp: 16/794b lim: 90 exec/s: 35 rss: 74Mb L: 35/80 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:10:35.756 [2024-10-15 11:10:16.109076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.756 [2024-10-15 11:10:16.109106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.756 [2024-10-15 11:10:16.109155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:35.756 [2024-10-15 11:10:16.109173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:35.756 [2024-10-15 11:10:16.109204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:35.756 [2024-10-15 11:10:16.109221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:35.756 #36 NEW cov: 12508 ft: 15565 corp: 17/865b lim: 90 exec/s: 36 rss: 74Mb L: 71/80 MS: 1 ChangeBinInt- 00:10:35.756 [2024-10-15 11:10:16.169073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.756 [2024-10-15 11:10:16.169103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.756 #37 NEW cov: 12508 ft: 15602 corp: 18/900b lim: 90 exec/s: 37 rss: 74Mb L: 35/80 MS: 1 ChangeBit- 00:10:35.756 [2024-10-15 11:10:16.259458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.756 [2024-10-15 11:10:16.259488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:35.756 [2024-10-15 11:10:16.259536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:35.756 [2024-10-15 11:10:16.259555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:35.756 [2024-10-15 11:10:16.259586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:35.756 [2024-10-15 11:10:16.259603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:35.756 #38 NEW cov: 12508 ft: 15644 corp: 19/962b lim: 90 exec/s: 38 rss: 74Mb L: 62/80 MS: 1 EraseBytes- 00:10:35.756 [2024-10-15 11:10:16.349650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:35.756 [2024-10-15 11:10:16.349681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:36.105 #39 NEW cov: 12508 ft: 15645 corp: 20/990b lim: 90 exec/s: 39 rss: 74Mb L: 28/80 MS: 1 CrossOver- 00:10:36.105 [2024-10-15 11:10:16.409883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:36.105 [2024-10-15 11:10:16.409913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:36.105 [2024-10-15 11:10:16.409962] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:36.105 [2024-10-15 11:10:16.409980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:36.105 [2024-10-15 11:10:16.410011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:36.105 [2024-10-15 11:10:16.410034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:36.105 [2024-10-15 11:10:16.410064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:10:36.105 [2024-10-15 11:10:16.410081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:36.105 #45 NEW cov: 12508 ft: 15668 corp: 21/1069b lim: 90 exec/s: 45 rss: 74Mb L: 79/80 MS: 1 InsertRepeatedBytes- 00:10:36.105 [2024-10-15 11:10:16.469978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:36.105 [2024-10-15 11:10:16.470008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:36.105 [2024-10-15 11:10:16.470065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:36.105 [2024-10-15 11:10:16.470084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:36.105 [2024-10-15 11:10:16.470116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:36.105 [2024-10-15 11:10:16.470133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:36.105 #46 NEW cov: 12508 ft: 15733 corp: 22/1137b lim: 90 exec/s: 46 rss: 74Mb L: 68/80 MS: 1 EraseBytes- 00:10:36.105 [2024-10-15 11:10:16.560192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:36.105 [2024-10-15 11:10:16.560222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:36.105 [2024-10-15 11:10:16.560272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:36.106 [2024-10-15 11:10:16.560295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:36.106 #47 NEW cov: 12515 ft: 15810 corp: 23/1185b lim: 90 exec/s: 47 rss: 74Mb L: 48/80 MS: 1 CopyPart- 00:10:36.106 [2024-10-15 11:10:16.650473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:36.106 [2024-10-15 11:10:16.650503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:36.106 [2024-10-15 11:10:16.650553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:36.106 [2024-10-15 11:10:16.650571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:36.106 [2024-10-15 11:10:16.650602] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:36.106 [2024-10-15 11:10:16.650619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:36.365 #48 NEW cov: 12515 ft: 15867 corp: 24/1247b lim: 90 exec/s: 24 rss: 74Mb L: 62/80 MS: 1 ShuffleBytes- 00:10:36.365 #48 DONE cov: 12515 ft: 15867 corp: 24/1247b lim: 90 exec/s: 24 rss: 74Mb 00:10:36.365 ###### Recommended dictionary. ###### 00:10:36.365 "\001\000\000\000\000\000\000\000" # Uses: 1 00:10:36.365 "\301$\242\340`\256+\000" # Uses: 0 00:10:36.365 "\001\000\000\000\000\000\000\015" # Uses: 0 00:10:36.365 ###### End of recommended dictionary. ###### 00:10:36.365 Done 48 runs in 2 second(s) 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:36.365 11:10:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:10:36.365 [2024-10-15 11:10:16.869641] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:36.366 [2024-10-15 11:10:16.869712] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721064 ] 00:10:36.625 [2024-10-15 11:10:17.042957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.625 [2024-10-15 11:10:17.082518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.625 [2024-10-15 11:10:17.141417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.625 [2024-10-15 11:10:17.157563] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:10:36.625 INFO: Running with entropic power schedule (0xFF, 100). 00:10:36.625 INFO: Seed: 3814686515 00:10:36.625 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:36.625 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:36.625 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:10:36.625 INFO: A corpus is not provided, starting from an empty corpus 00:10:36.625 #2 INITED exec/s: 0 rss: 66Mb 00:10:36.625 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:36.625 This may also happen if the target rejected all inputs we tried so far 00:10:36.625 [2024-10-15 11:10:17.212558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:36.625 [2024-10-15 11:10:17.212592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:36.625 [2024-10-15 11:10:17.212642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:36.625 [2024-10-15 11:10:17.212661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:36.625 [2024-10-15 11:10:17.212692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:36.625 [2024-10-15 11:10:17.212709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.142 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:10:37.142 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:37.142 #11 NEW cov: 12262 ft: 12261 corp: 2/34b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 4 ChangeBit-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:10:37.142 [2024-10-15 11:10:17.573726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.142 [2024-10-15 11:10:17.573769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.142 [2024-10-15 11:10:17.573804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.142 [2024-10-15 11:10:17.573822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:10:37.142 [2024-10-15 11:10:17.573850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.142 [2024-10-15 11:10:17.573866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.142 #12 NEW cov: 12375 ft: 12688 corp: 3/67b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ShuffleBytes- 00:10:37.142 [2024-10-15 11:10:17.663851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.142 [2024-10-15 11:10:17.663884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.142 [2024-10-15 11:10:17.663917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.142 [2024-10-15 11:10:17.663935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.142 [2024-10-15 11:10:17.663969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.142 [2024-10-15 11:10:17.663985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.142 #13 NEW cov: 12381 ft: 13049 corp: 4/100b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:10:37.142 [2024-10-15 11:10:17.754091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.142 [2024-10-15 11:10:17.754121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.142 [2024-10-15 11:10:17.754171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.142 [2024-10-15 11:10:17.754191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.142 [2024-10-15 11:10:17.754221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.142 [2024-10-15 11:10:17.754238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.402 #14 NEW cov: 12466 ft: 13317 corp: 5/133b lim: 50 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:10:37.402 [2024-10-15 11:10:17.844393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.402 [2024-10-15 11:10:17.844426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:17.844461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.402 [2024-10-15 11:10:17.844480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:17.844513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.402 [2024-10-15 11:10:17.844530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:10:37.402 #15 NEW cov: 12466 ft: 13458 corp: 6/167b lim: 50 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertByte- 00:10:37.402 [2024-10-15 11:10:17.904438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.402 [2024-10-15 11:10:17.904468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:17.904517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.402 [2024-10-15 11:10:17.904535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:17.904566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.402 [2024-10-15 11:10:17.904584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.402 #19 NEW cov: 12466 ft: 13505 corp: 7/205b lim: 50 exec/s: 0 rss: 73Mb L: 38/38 MS: 4 CrossOver-InsertByte-ChangeBinInt-CrossOver- 00:10:37.402 [2024-10-15 11:10:17.964606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.402 [2024-10-15 11:10:17.964635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:17.964684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.402 [2024-10-15 11:10:17.964703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:17.964734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.402 [2024-10-15 11:10:17.964755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.402 #20 NEW cov: 12466 ft: 13571 corp: 8/238b lim: 50 exec/s: 0 rss: 73Mb L: 33/38 MS: 1 ChangeBit- 00:10:37.402 [2024-10-15 11:10:18.014745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.402 [2024-10-15 11:10:18.014775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:18.014823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.402 [2024-10-15 11:10:18.014841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.402 [2024-10-15 11:10:18.014873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.403 [2024-10-15 11:10:18.014889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.661 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:37.661 #21 NEW cov: 12483 ft: 13624 corp: 9/271b lim: 50 exec/s: 0 rss: 74Mb L: 33/38 MS: 1 ShuffleBytes- 00:10:37.661 [2024-10-15 11:10:18.104870] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.661 [2024-10-15 11:10:18.104900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.661 #22 NEW cov: 12483 ft: 14499 corp: 10/283b lim: 50 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 CrossOver- 00:10:37.661 [2024-10-15 11:10:18.165002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.661 [2024-10-15 11:10:18.165040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.661 #23 NEW cov: 12483 ft: 14598 corp: 11/295b lim: 50 exec/s: 23 rss: 74Mb L: 12/38 MS: 1 ChangeByte- 00:10:37.661 [2024-10-15 11:10:18.255397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.661 [2024-10-15 11:10:18.255427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.661 [2024-10-15 11:10:18.255476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.661 [2024-10-15 11:10:18.255494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.661 [2024-10-15 11:10:18.255525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.661 [2024-10-15 11:10:18.255542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.920 #24 NEW cov: 12483 ft: 14683 corp: 12/330b lim: 50 exec/s: 24 rss: 74Mb L: 35/38 MS: 1 CMP- DE: "\004\000"- 00:10:37.920 [2024-10-15 11:10:18.315527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.920 [2024-10-15 11:10:18.315557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.920 [2024-10-15 11:10:18.315606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.920 [2024-10-15 11:10:18.315624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.920 [2024-10-15 11:10:18.315655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.920 [2024-10-15 11:10:18.315672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.920 #30 NEW cov: 12483 ft: 14736 corp: 13/364b lim: 50 exec/s: 30 rss: 74Mb L: 34/38 MS: 1 InsertByte- 00:10:37.920 [2024-10-15 11:10:18.365655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.920 [2024-10-15 11:10:18.365684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.920 [2024-10-15 11:10:18.365733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.920 [2024-10-15 11:10:18.365751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.920 [2024-10-15 11:10:18.365781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.920 [2024-10-15 11:10:18.365799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.920 #31 NEW cov: 12483 ft: 14755 corp: 14/398b lim: 50 exec/s: 31 rss: 74Mb L: 34/38 MS: 1 ChangeBinInt- 00:10:37.921 [2024-10-15 11:10:18.455913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.921 [2024-10-15 11:10:18.455942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.921 [2024-10-15 11:10:18.455991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.921 [2024-10-15 11:10:18.456009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.921 [2024-10-15 11:10:18.456047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.921 [2024-10-15 11:10:18.456065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:37.921 #32 NEW cov: 12483 ft: 14777 corp: 15/433b lim: 50 exec/s: 32 rss: 74Mb L: 35/38 MS: 1 PersAutoDict- DE: "\004\000"- 00:10:37.921 [2024-10-15 11:10:18.506010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:37.921 [2024-10-15 11:10:18.506045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:37.921 [2024-10-15 11:10:18.506093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:37.921 [2024-10-15 11:10:18.506111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:37.921 [2024-10-15 11:10:18.506143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:37.921 [2024-10-15 11:10:18.506160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.180 #33 NEW cov: 12483 ft: 14832 corp: 16/467b lim: 50 exec/s: 33 rss: 74Mb L: 34/38 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:10:38.180 [2024-10-15 11:10:18.596295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.180 [2024-10-15 11:10:18.596325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.180 [2024-10-15 11:10:18.596373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.180 [2024-10-15 11:10:18.596391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.180 [2024-10-15 11:10:18.596422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.180 [2024-10-15 11:10:18.596440] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.180 #34 NEW cov: 12483 ft: 14845 corp: 17/500b lim: 50 exec/s: 34 rss: 74Mb L: 33/38 MS: 1 ChangeBinInt- 00:10:38.180 [2024-10-15 11:10:18.646429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.180 [2024-10-15 11:10:18.646463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.180 [2024-10-15 11:10:18.646497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.180 [2024-10-15 11:10:18.646516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.180 [2024-10-15 11:10:18.646562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.180 [2024-10-15 11:10:18.646579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.180 #35 NEW cov: 12483 ft: 14881 corp: 18/533b lim: 50 exec/s: 35 rss: 74Mb L: 33/38 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:10:38.180 [2024-10-15 11:10:18.736664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.180 [2024-10-15 11:10:18.736694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.180 [2024-10-15 11:10:18.736743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.180 [2024-10-15 11:10:18.736763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.180 [2024-10-15 11:10:18.736795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.180 [2024-10-15 11:10:18.736812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.180 #36 NEW cov: 12483 ft: 14902 corp: 19/566b lim: 50 exec/s: 36 rss: 74Mb L: 33/38 MS: 1 CrossOver- 00:10:38.449 [2024-10-15 11:10:18.826875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.449 [2024-10-15 11:10:18.826906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:18.826954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.450 [2024-10-15 11:10:18.826972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:18.827004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.450 [2024-10-15 11:10:18.827021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.450 #37 NEW cov: 12483 ft: 14936 corp: 20/601b lim: 50 exec/s: 37 rss: 74Mb L: 35/38 MS: 1 InsertByte- 00:10:38.450 [2024-10-15 11:10:18.876974] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.450 [2024-10-15 11:10:18.877003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:18.877058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.450 [2024-10-15 11:10:18.877077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:18.877108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.450 [2024-10-15 11:10:18.877125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.450 #38 NEW cov: 12483 ft: 14948 corp: 21/634b lim: 50 exec/s: 38 rss: 74Mb L: 33/38 MS: 1 ChangeBit- 00:10:38.450 [2024-10-15 11:10:18.927115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.450 [2024-10-15 11:10:18.927148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:18.927197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.450 [2024-10-15 11:10:18.927215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:18.927247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.450 [2024-10-15 11:10:18.927263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.450 #39 NEW cov: 12483 ft: 14970 corp: 22/665b lim: 50 exec/s: 39 rss: 74Mb L: 31/38 MS: 1 EraseBytes- 00:10:38.450 [2024-10-15 11:10:19.017392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.450 [2024-10-15 11:10:19.017421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:19.017470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.450 [2024-10-15 11:10:19.017488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:19.017520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.450 [2024-10-15 11:10:19.017537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.450 #40 NEW cov: 12483 ft: 14973 corp: 23/698b lim: 50 exec/s: 40 rss: 74Mb L: 33/38 MS: 1 ChangeBit- 00:10:38.450 [2024-10-15 11:10:19.067491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.450 [2024-10-15 11:10:19.067520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:19.067569] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.450 [2024-10-15 11:10:19.067587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.450 [2024-10-15 11:10:19.067618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.450 [2024-10-15 11:10:19.067635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.711 #41 NEW cov: 12490 ft: 15026 corp: 24/733b lim: 50 exec/s: 41 rss: 74Mb L: 35/38 MS: 1 InsertByte- 00:10:38.711 [2024-10-15 11:10:19.157760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:38.711 [2024-10-15 11:10:19.157791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:38.711 [2024-10-15 11:10:19.157841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:38.711 [2024-10-15 11:10:19.157860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:38.711 [2024-10-15 11:10:19.157891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:38.711 [2024-10-15 11:10:19.157908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:38.711 #42 NEW cov: 12490 ft: 15033 corp: 25/770b lim: 50 exec/s: 21 rss: 74Mb L: 37/38 MS: 1 CrossOver- 00:10:38.711 #42 DONE cov: 12490 ft: 15033 corp: 25/770b lim: 50 exec/s: 21 rss: 74Mb 00:10:38.711 ###### Recommended dictionary. ###### 00:10:38.711 "\004\000" # Uses: 3 00:10:38.711 "\377\377\377\377\377\377\377\377" # Uses: 1 00:10:38.711 ###### End of recommended dictionary. 
###### 00:10:38.711 Done 42 runs in 2 second(s) 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:38.711 11:10:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:10:38.711 [2024-10-15 11:10:19.340450] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:10:38.711 [2024-10-15 11:10:19.340521] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721333 ] 00:10:38.971 [2024-10-15 11:10:19.514405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.971 [2024-10-15 11:10:19.552844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.232 [2024-10-15 11:10:19.611800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.232 [2024-10-15 11:10:19.627943] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:10:39.232 INFO: Running with entropic power schedule (0xFF, 100). 00:10:39.232 INFO: Seed: 1988712208 00:10:39.232 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:39.232 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:39.232 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:10:39.232 INFO: A corpus is not provided, starting from an empty corpus 00:10:39.232 #2 INITED exec/s: 0 rss: 66Mb 00:10:39.232 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:39.232 This may also happen if the target rejected all inputs we tried so far 00:10:39.232 [2024-10-15 11:10:19.695136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.232 [2024-10-15 11:10:19.695178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.232 [2024-10-15 11:10:19.695258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:39.232 [2024-10-15 11:10:19.695279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:39.491 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:10:39.491 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:39.492 #3 NEW cov: 12288 ft: 12289 corp: 2/51b lim: 85 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:10:39.492 [2024-10-15 11:10:20.036056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.492 [2024-10-15 11:10:20.036108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.492 [2024-10-15 11:10:20.036216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:39.492 [2024-10-15 11:10:20.036237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:39.492 #4 NEW cov: 12401 ft: 12855 corp: 3/101b lim: 85 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeByte- 00:10:39.492 [2024-10-15 11:10:20.106312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.492 [2024-10-15 11:10:20.106348] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.492 [2024-10-15 11:10:20.106426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:39.492 [2024-10-15 11:10:20.106448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:39.751 #5 NEW cov: 12407 ft: 13091 corp: 4/151b lim: 85 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes- 00:10:39.751 [2024-10-15 11:10:20.156431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.751 [2024-10-15 11:10:20.156464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.751 [2024-10-15 11:10:20.156553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:39.751 [2024-10-15 11:10:20.156568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:39.751 #6 NEW cov: 12492 ft: 13342 corp: 5/201b lim: 85 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:10:39.751 [2024-10-15 11:10:20.226735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.751 [2024-10-15 11:10:20.226764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.751 [2024-10-15 11:10:20.226827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:39.751 [2024-10-15 11:10:20.226846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:39.751 #12 NEW cov: 12492 ft: 13404 corp: 6/251b lim: 85 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CrossOver- 00:10:39.751 [2024-10-15 11:10:20.277308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.751 [2024-10-15 11:10:20.277338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.751 [2024-10-15 11:10:20.277425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:39.751 [2024-10-15 11:10:20.277443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:39.751 [2024-10-15 11:10:20.277528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:39.751 [2024-10-15 11:10:20.277550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:39.751 #13 NEW cov: 12492 ft: 13878 corp: 7/302b lim: 85 exec/s: 0 rss: 73Mb L: 51/51 MS: 1 InsertByte- 00:10:39.751 [2024-10-15 11:10:20.347112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:39.751 [2024-10-15 11:10:20.347140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:39.751 [2024-10-15 11:10:20.347201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:1 nsid:0 00:10:39.751 [2024-10-15 11:10:20.347230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.009 #19 NEW cov: 12492 ft: 13924 corp: 8/352b lim: 85 exec/s: 0 rss: 74Mb L: 50/51 MS: 1 CrossOver- 00:10:40.009 [2024-10-15 11:10:20.416990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.009 [2024-10-15 11:10:20.417019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.009 #20 NEW cov: 12492 ft: 14688 corp: 9/370b lim: 85 exec/s: 0 rss: 74Mb L: 18/51 MS: 1 CrossOver- 00:10:40.009 [2024-10-15 11:10:20.487631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.009 [2024-10-15 11:10:20.487661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.009 [2024-10-15 11:10:20.487749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.009 [2024-10-15 11:10:20.487767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.010 #21 NEW cov: 12492 ft: 14726 corp: 10/420b lim: 85 exec/s: 0 rss: 74Mb L: 50/51 MS: 1 ShuffleBytes- 00:10:40.010 [2024-10-15 11:10:20.537933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.010 [2024-10-15 11:10:20.537966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.010 [2024-10-15 11:10:20.538045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.010 [2024-10-15 11:10:20.538068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.010 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:40.010 #22 NEW cov: 12515 ft: 14780 corp: 11/470b lim: 85 exec/s: 0 rss: 74Mb L: 50/51 MS: 1 ChangeByte- 00:10:40.010 [2024-10-15 11:10:20.588507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.010 [2024-10-15 11:10:20.588538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.010 [2024-10-15 11:10:20.588615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.010 [2024-10-15 11:10:20.588636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.010 [2024-10-15 11:10:20.588696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.010 [2024-10-15 11:10:20.588719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.010 #23 NEW cov: 12515 ft: 14800 corp: 12/527b lim: 85 exec/s: 0 rss: 74Mb L: 57/57 MS: 1 CrossOver- 00:10:40.268 [2024-10-15 11:10:20.658918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.268 [2024-10-15 11:10:20.658949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.268 [2024-10-15 11:10:20.659019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.268 [2024-10-15 11:10:20.659042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.268 [2024-10-15 11:10:20.659106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.268 [2024-10-15 11:10:20.659124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.268 #24 NEW cov: 12515 ft: 14801 corp: 13/578b lim: 85 exec/s: 24 rss: 74Mb L: 51/57 MS: 1 InsertByte- 00:10:40.269 [2024-10-15 11:10:20.729034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.269 [2024-10-15 11:10:20.729069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.269 [2024-10-15 11:10:20.729147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.269 [2024-10-15 11:10:20.729168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.269 [2024-10-15 11:10:20.729232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.269 [2024-10-15 11:10:20.729254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.269 #25 NEW cov: 12515 ft: 14816 corp: 14/633b lim: 85 exec/s: 25 rss: 74Mb L: 55/57 MS: 1 CopyPart- 00:10:40.269 [2024-10-15 11:10:20.788818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.269 [2024-10-15 11:10:20.788854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.269 [2024-10-15 11:10:20.788961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.269 [2024-10-15 11:10:20.788982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.269 #26 NEW cov: 12515 ft: 14853 corp: 15/683b lim: 85 exec/s: 26 rss: 74Mb L: 50/57 MS: 1 CopyPart- 00:10:40.269 [2024-10-15 11:10:20.839456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.269 [2024-10-15 11:10:20.839491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.269 [2024-10-15 11:10:20.839558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.269 [2024-10-15 11:10:20.839576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.269 [2024-10-15 11:10:20.839658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.269 [2024-10-15 11:10:20.839678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.269 #27 NEW cov: 12515 ft: 14885 corp: 16/738b lim: 85 exec/s: 27 rss: 74Mb L: 55/57 MS: 1 ChangeByte- 00:10:40.529 [2024-10-15 11:10:20.909350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.529 [2024-10-15 11:10:20.909381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:20.909463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.529 [2024-10-15 11:10:20.909481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.529 #28 NEW cov: 12515 ft: 14888 corp: 17/788b lim: 85 exec/s: 28 rss: 74Mb L: 50/57 MS: 1 ChangeBinInt- 00:10:40.529 [2024-10-15 11:10:20.979803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.529 [2024-10-15 11:10:20.979832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:20.979921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.529 [2024-10-15 11:10:20.979940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:20.980018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.529 [2024-10-15 11:10:20.980039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.529 #29 NEW cov: 12515 ft: 14909 corp: 18/843b lim: 85 exec/s: 29 rss: 74Mb L: 55/57 MS: 1 CopyPart- 00:10:40.529 [2024-10-15 11:10:21.050062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.529 [2024-10-15 11:10:21.050093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:21.050183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.529 [2024-10-15 11:10:21.050201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:21.050285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.529 [2024-10-15 11:10:21.050303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.529 #30 NEW cov: 12515 ft: 14910 corp: 19/894b lim: 85 exec/s: 30 rss: 74Mb L: 51/57 MS: 1 CopyPart- 00:10:40.529 [2024-10-15 11:10:21.099625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.529 [2024-10-15 11:10:21.099653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:10:40.529 #31 NEW cov: 12515 ft: 14932 corp: 20/926b lim: 85 exec/s: 31 rss: 74Mb L: 32/57 MS: 1 EraseBytes- 00:10:40.529 [2024-10-15 11:10:21.150581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.529 [2024-10-15 11:10:21.150609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:21.150703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.529 [2024-10-15 11:10:21.150721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.529 [2024-10-15 11:10:21.150799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.529 [2024-10-15 11:10:21.150818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.788 #37 NEW cov: 12515 ft: 14978 corp: 21/977b lim: 85 exec/s: 37 rss: 74Mb L: 51/57 MS: 1 InsertByte- 00:10:40.788 [2024-10-15 11:10:21.221217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.788 [2024-10-15 11:10:21.221246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.788 [2024-10-15 11:10:21.221334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.788 [2024-10-15 11:10:21.221353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.788 [2024-10-15 11:10:21.221416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.788 [2024-10-15 11:10:21.221438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.788 [2024-10-15 11:10:21.221546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:40.788 [2024-10-15 11:10:21.221567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:40.788 #38 NEW cov: 12515 ft: 15363 corp: 22/1047b lim: 85 exec/s: 38 rss: 74Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:10:40.788 [2024-10-15 11:10:21.271051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.788 [2024-10-15 11:10:21.271083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.788 [2024-10-15 11:10:21.271162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.788 [2024-10-15 11:10:21.271182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.789 [2024-10-15 11:10:21.271239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.789 [2024-10-15 11:10:21.271257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.789 #39 NEW cov: 12515 ft: 15374 corp: 23/1105b lim: 85 exec/s: 39 rss: 74Mb L: 58/70 MS: 1 InsertRepeatedBytes- 00:10:40.789 [2024-10-15 11:10:21.321261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.789 [2024-10-15 11:10:21.321290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.789 [2024-10-15 11:10:21.321365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:40.789 [2024-10-15 11:10:21.321382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:40.789 [2024-10-15 11:10:21.321463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:40.789 [2024-10-15 11:10:21.321484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:40.789 #40 NEW cov: 12515 ft: 15444 corp: 24/1160b lim: 85 exec/s: 40 rss: 74Mb L: 55/70 MS: 1 ChangeBinInt- 00:10:40.789 [2024-10-15 11:10:21.370842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:40.789 [2024-10-15 11:10:21.370870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:40.789 #41 NEW cov: 12515 ft: 15457 corp: 25/1192b lim: 85 exec/s: 41 rss: 74Mb L: 32/70 MS: 1 ShuffleBytes- 00:10:41.048 [2024-10-15 11:10:21.441346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:41.048 [2024-10-15 11:10:21.441374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.441437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:41.048 [2024-10-15 11:10:21.441455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:41.048 #42 NEW cov: 12515 ft: 15476 corp: 26/1242b lim: 85 exec/s: 42 rss: 74Mb L: 50/70 MS: 1 ChangeByte- 00:10:41.048 [2024-10-15 11:10:21.492742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:41.048 [2024-10-15 11:10:21.492772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.492867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:41.048 [2024-10-15 11:10:21.492891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.492959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:41.048 [2024-10-15 11:10:21.492977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.493065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:41.048 
[2024-10-15 11:10:21.493086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.493180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:10:41.048 [2024-10-15 11:10:21.493198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:41.048 #43 NEW cov: 12515 ft: 15537 corp: 27/1327b lim: 85 exec/s: 43 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:10:41.048 [2024-10-15 11:10:21.562436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:41.048 [2024-10-15 11:10:21.562464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.562546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:41.048 [2024-10-15 11:10:21.562565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:41.048 [2024-10-15 11:10:21.562645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:41.048 [2024-10-15 11:10:21.562662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:41.048 #44 NEW cov: 12515 ft: 15579 corp: 28/1386b lim: 85 exec/s: 44 rss: 74Mb L: 59/85 MS: 1 InsertRepeatedBytes- 00:10:41.048 [2024-10-15 11:10:21.612183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:41.048 [2024-10-15 11:10:21.612212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:41.048 #45 NEW cov: 12515 ft: 15598 corp: 29/1404b lim: 85 exec/s: 45 rss: 74Mb L: 18/85 MS: 1 ChangeByte- 00:10:41.307 [2024-10-15 11:10:21.682858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:41.307 [2024-10-15 11:10:21.682886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:41.307 [2024-10-15 11:10:21.682961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:41.307 [2024-10-15 11:10:21.682979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:41.307 #46 NEW cov: 12515 ft: 15601 corp: 30/1454b lim: 85 exec/s: 23 rss: 75Mb L: 50/85 MS: 1 ChangeBinInt- 00:10:41.307 #46 DONE cov: 12515 ft: 15601 corp: 30/1454b lim: 85 exec/s: 23 rss: 75Mb 00:10:41.307 Done 46 runs in 2 second(s) 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@24 -- # local timen=1 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:41.307 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:41.308 11:10:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:10:41.308 [2024-10-15 11:10:21.851870] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:41.308 [2024-10-15 11:10:21.851935] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721620 ] 00:10:41.567 [2024-10-15 11:10:22.027828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.567 [2024-10-15 11:10:22.067649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.567 [2024-10-15 11:10:22.126726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.567 [2024-10-15 11:10:22.142880] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:10:41.567 INFO: Running with entropic power schedule (0xFF, 100). 00:10:41.567 INFO: Seed: 209738157 00:10:41.567 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:41.567 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:41.567 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:10:41.567 INFO: A corpus is not provided, starting from an empty corpus 00:10:41.567 #2 INITED exec/s: 0 rss: 66Mb 00:10:41.567 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:41.567 This may also happen if the target rejected all inputs we tried so far 00:10:41.567 [2024-10-15 11:10:22.188485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:41.567 [2024-10-15 11:10:22.188518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:41.567 [2024-10-15 11:10:22.188556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:41.567 [2024-10-15 11:10:22.188571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:41.567 [2024-10-15 11:10:22.188624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:41.567 [2024-10-15 11:10:22.188639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:41.567 [2024-10-15 11:10:22.188691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:41.567 [2024-10-15 11:10:22.188706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.086 NEW_FUNC[1/715]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:10:42.086 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:42.086 #5 NEW cov: 12221 ft: 12220 corp: 2/21b lim: 25 exec/s: 0 rss: 73Mb L: 20/20 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:10:42.086 [2024-10-15 11:10:22.529374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.086 [2024-10-15 11:10:22.529411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.529452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.086 [2024-10-15 11:10:22.529467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.529537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.086 [2024-10-15 11:10:22.529553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.529607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.086 [2024-10-15 11:10:22.529622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.086 #6 NEW cov: 12334 ft: 12721 corp: 3/45b lim: 25 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 CopyPart- 00:10:42.086 [2024-10-15 11:10:22.589489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.086 [2024-10-15 11:10:22.589519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:10:42.086 [2024-10-15 11:10:22.589567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.086 [2024-10-15 11:10:22.589583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.589634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.086 [2024-10-15 11:10:22.589650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.589704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.086 [2024-10-15 11:10:22.589719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.086 #7 NEW cov: 12340 ft: 13119 corp: 4/65b lim: 25 exec/s: 0 rss: 73Mb L: 20/24 MS: 1 ShuffleBytes- 00:10:42.086 [2024-10-15 11:10:22.629354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.086 [2024-10-15 11:10:22.629381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.629426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.086 [2024-10-15 11:10:22.629443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.086 #8 NEW cov: 12425 ft: 13812 corp: 5/77b lim: 25 exec/s: 0 rss: 73Mb L: 12/24 MS: 1 EraseBytes- 00:10:42.086 [2024-10-15 11:10:22.689752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.086 [2024-10-15 11:10:22.689781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.689828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.086 [2024-10-15 11:10:22.689847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.689900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.086 [2024-10-15 11:10:22.689914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.086 [2024-10-15 11:10:22.689968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.086 [2024-10-15 11:10:22.689985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.346 #9 NEW cov: 12425 ft: 13912 corp: 6/97b lim: 25 exec/s: 0 rss: 73Mb L: 20/24 MS: 1 CopyPart- 00:10:42.346 [2024-10-15 11:10:22.749893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.346 [2024-10-15 11:10:22.749919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 
11:10:22.749974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.346 [2024-10-15 11:10:22.749990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.750063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.346 [2024-10-15 11:10:22.750080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.750131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.346 [2024-10-15 11:10:22.750147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.346 #10 NEW cov: 12425 ft: 13987 corp: 7/117b lim: 25 exec/s: 0 rss: 73Mb L: 20/24 MS: 1 ChangeByte- 00:10:42.346 [2024-10-15 11:10:22.789784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.346 [2024-10-15 11:10:22.789811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.789850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.346 [2024-10-15 11:10:22.789864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.346 #13 NEW cov: 12425 ft: 14154 corp: 8/127b lim: 25 exec/s: 0 rss: 73Mb L: 10/24 MS: 3 CopyPart-CopyPart-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:10:42.346 [2024-10-15 11:10:22.830141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.346 [2024-10-15 11:10:22.830167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.830220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.346 [2024-10-15 11:10:22.830236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.830288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.346 [2024-10-15 11:10:22.830303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.830358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.346 [2024-10-15 11:10:22.830373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.346 #14 NEW cov: 12425 ft: 14202 corp: 9/147b lim: 25 exec/s: 0 rss: 73Mb L: 20/24 MS: 1 ShuffleBytes- 00:10:42.346 [2024-10-15 11:10:22.870251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.346 [2024-10-15 11:10:22.870277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:10:42.346 [2024-10-15 11:10:22.870331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.346 [2024-10-15 11:10:22.870347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.870399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.346 [2024-10-15 11:10:22.870414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.870468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.346 [2024-10-15 11:10:22.870482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.346 #15 NEW cov: 12425 ft: 14231 corp: 10/167b lim: 25 exec/s: 0 rss: 73Mb L: 20/24 MS: 1 ShuffleBytes- 00:10:42.346 [2024-10-15 11:10:22.930446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.346 [2024-10-15 11:10:22.930473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.930526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.346 [2024-10-15 11:10:22.930541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.930593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.346 [2024-10-15 11:10:22.930608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.930661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.346 [2024-10-15 11:10:22.930677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.346 #16 NEW cov: 12425 ft: 14328 corp: 11/187b lim: 25 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeBinInt- 00:10:42.346 [2024-10-15 11:10:22.970579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.346 [2024-10-15 11:10:22.970606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.970658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.346 [2024-10-15 11:10:22.970674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.970726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.346 [2024-10-15 11:10:22.970741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.346 [2024-10-15 11:10:22.970797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:3 nsid:0 00:10:42.346 [2024-10-15 11:10:22.970813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.606 #17 NEW cov: 12425 ft: 14347 corp: 12/208b lim: 25 exec/s: 0 rss: 74Mb L: 21/24 MS: 1 InsertByte- 00:10:42.606 [2024-10-15 11:10:23.010664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.606 [2024-10-15 11:10:23.010694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.010741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.606 [2024-10-15 11:10:23.010757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.010807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.606 [2024-10-15 11:10:23.010823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.010876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.606 [2024-10-15 11:10:23.010892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.606 #18 NEW cov: 12425 ft: 14368 corp: 13/228b lim: 25 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeBit- 00:10:42.606 [2024-10-15 11:10:23.070841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.606 [2024-10-15 11:10:23.070870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.070921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.606 [2024-10-15 11:10:23.070936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.071005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.606 [2024-10-15 11:10:23.071020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.071080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.606 [2024-10-15 11:10:23.071097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.606 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:42.606 #19 NEW cov: 12448 ft: 14477 corp: 14/248b lim: 25 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeBinInt- 00:10:42.606 [2024-10-15 11:10:23.110959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.606 [2024-10-15 11:10:23.110987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.606 
[2024-10-15 11:10:23.111044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.606 [2024-10-15 11:10:23.111061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.111113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.606 [2024-10-15 11:10:23.111128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.111182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.606 [2024-10-15 11:10:23.111197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.606 #20 NEW cov: 12448 ft: 14518 corp: 15/272b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:10:42.606 [2024-10-15 11:10:23.150825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.606 [2024-10-15 11:10:23.150853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.150909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.606 [2024-10-15 11:10:23.150924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.606 #21 NEW cov: 12448 ft: 14540 corp: 16/284b lim: 25 exec/s: 21 rss: 74Mb L: 12/24 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:10:42.606 [2024-10-15 11:10:23.211249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.606 [2024-10-15 11:10:23.211278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.211331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.606 [2024-10-15 11:10:23.211347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.211398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.606 [2024-10-15 11:10:23.211415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.606 [2024-10-15 11:10:23.211466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.606 [2024-10-15 11:10:23.211481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.866 #22 NEW cov: 12448 ft: 14561 corp: 17/304b lim: 25 exec/s: 22 rss: 74Mb L: 20/24 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:10:42.866 [2024-10-15 11:10:23.271192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.866 [2024-10-15 11:10:23.271219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.271274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.866 [2024-10-15 11:10:23.271289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.866 #23 NEW cov: 12448 ft: 14577 corp: 18/314b lim: 25 exec/s: 23 rss: 74Mb L: 10/24 MS: 1 EraseBytes- 00:10:42.866 [2024-10-15 11:10:23.311607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.866 [2024-10-15 11:10:23.311636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.311708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.866 [2024-10-15 11:10:23.311723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.311774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.866 [2024-10-15 11:10:23.311790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.311843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.866 [2024-10-15 11:10:23.311859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.311914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:42.866 [2024-10-15 11:10:23.311930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:42.866 #24 NEW cov: 12448 ft: 14639 corp: 19/339b lim: 25 exec/s: 24 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:10:42.866 [2024-10-15 11:10:23.351747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.866 [2024-10-15 11:10:23.351778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.351844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.866 [2024-10-15 11:10:23.351861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.351913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.866 [2024-10-15 11:10:23.351928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.351982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.866 [2024-10-15 11:10:23.351997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.352056] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:42.866 [2024-10-15 11:10:23.352073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:42.866 #25 NEW cov: 12448 ft: 14651 corp: 20/364b lim: 25 exec/s: 25 rss: 74Mb L: 25/25 MS: 1 ChangeBinInt- 00:10:42.866 [2024-10-15 11:10:23.411912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.866 [2024-10-15 11:10:23.411939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.411994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.866 [2024-10-15 11:10:23.412008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.412063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.866 [2024-10-15 11:10:23.412077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.412130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.866 [2024-10-15 11:10:23.412146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.412200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:42.866 [2024-10-15 11:10:23.412214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:42.866 #26 NEW cov: 12448 ft: 14665 corp: 21/389b lim: 25 exec/s: 26 rss: 74Mb L: 25/25 MS: 1 ChangeBinInt- 00:10:42.866 [2024-10-15 11:10:23.451898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.866 [2024-10-15 11:10:23.451924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.451983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.866 [2024-10-15 11:10:23.451998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.452057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.866 [2024-10-15 11:10:23.452071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.452127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.866 [2024-10-15 11:10:23.452149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:42.866 #27 NEW cov: 12448 ft: 14671 corp: 22/409b lim: 25 exec/s: 27 rss: 74Mb L: 20/25 MS: 1 ChangeByte- 00:10:42.866 [2024-10-15 11:10:23.492038] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:42.866 [2024-10-15 11:10:23.492065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.492122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:42.866 [2024-10-15 11:10:23.492137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.492190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:42.866 [2024-10-15 11:10:23.492205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:42.866 [2024-10-15 11:10:23.492258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:42.866 [2024-10-15 11:10:23.492271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.126 #28 NEW cov: 12448 ft: 14688 corp: 23/429b lim: 25 exec/s: 28 rss: 74Mb L: 20/25 MS: 1 ChangeBit- 00:10:43.126 [2024-10-15 11:10:23.532149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.126 [2024-10-15 11:10:23.532176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.532248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.126 [2024-10-15 11:10:23.532263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.532317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.126 [2024-10-15 11:10:23.532331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.532387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.126 [2024-10-15 11:10:23.532402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.126 #29 NEW cov: 12448 ft: 14729 corp: 24/449b lim: 25 exec/s: 29 rss: 75Mb L: 20/25 MS: 1 ChangeByte- 00:10:43.126 [2024-10-15 11:10:23.592342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.126 [2024-10-15 11:10:23.592368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.592440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.126 [2024-10-15 11:10:23.592454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.592508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.126 [2024-10-15 11:10:23.592524] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.592578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.126 [2024-10-15 11:10:23.592594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.126 #30 NEW cov: 12448 ft: 14775 corp: 25/473b lim: 25 exec/s: 30 rss: 75Mb L: 24/25 MS: 1 CopyPart- 00:10:43.126 [2024-10-15 11:10:23.652386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.126 [2024-10-15 11:10:23.652412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.652469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.126 [2024-10-15 11:10:23.652485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.652539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.126 [2024-10-15 11:10:23.652555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.126 #31 NEW cov: 12448 ft: 14994 corp: 26/490b lim: 25 exec/s: 31 rss: 75Mb L: 17/25 MS: 1 EraseBytes- 00:10:43.126 [2024-10-15 11:10:23.712630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.126 [2024-10-15 11:10:23.712656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.712727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.126 [2024-10-15 11:10:23.712743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.712796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.126 [2024-10-15 11:10:23.712812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.126 [2024-10-15 11:10:23.712867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.126 [2024-10-15 11:10:23.712882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.126 #32 NEW cov: 12448 ft: 15036 corp: 27/510b lim: 25 exec/s: 32 rss: 75Mb L: 20/25 MS: 1 ChangeBinInt- 00:10:43.126 [2024-10-15 11:10:23.752431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.126 [2024-10-15 11:10:23.752457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.386 #33 NEW cov: 12448 ft: 15485 corp: 28/516b lim: 25 exec/s: 33 rss: 75Mb L: 6/25 MS: 1 EraseBytes- 00:10:43.386 [2024-10-15 11:10:23.792875] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.386 [2024-10-15 11:10:23.792903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.792975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.386 [2024-10-15 11:10:23.792991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.793050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.386 [2024-10-15 11:10:23.793066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.793122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.386 [2024-10-15 11:10:23.793138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.386 #34 NEW cov: 12448 ft: 15516 corp: 29/536b lim: 25 exec/s: 34 rss: 75Mb L: 20/25 MS: 1 ChangeBinInt- 00:10:43.386 [2024-10-15 11:10:23.832990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.386 [2024-10-15 11:10:23.833017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.833102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.386 [2024-10-15 11:10:23.833134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.833190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.386 [2024-10-15 11:10:23.833206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.833260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.386 [2024-10-15 11:10:23.833276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.386 #35 NEW cov: 12448 ft: 15537 corp: 30/556b lim: 25 exec/s: 35 rss: 75Mb L: 20/25 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:10:43.386 [2024-10-15 11:10:23.893190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.386 [2024-10-15 11:10:23.893216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.893284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.386 [2024-10-15 11:10:23.893301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.893357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.386 [2024-10-15 
11:10:23.893373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.893429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.386 [2024-10-15 11:10:23.893443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.386 #36 NEW cov: 12448 ft: 15539 corp: 31/576b lim: 25 exec/s: 36 rss: 75Mb L: 20/25 MS: 1 CrossOver- 00:10:43.386 [2024-10-15 11:10:23.933091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.386 [2024-10-15 11:10:23.933117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.933174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.386 [2024-10-15 11:10:23.933189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.386 #37 NEW cov: 12448 ft: 15625 corp: 32/586b lim: 25 exec/s: 37 rss: 75Mb L: 10/25 MS: 1 ChangeBinInt- 00:10:43.386 [2024-10-15 11:10:23.993398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.386 [2024-10-15 11:10:23.993424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.386 [2024-10-15 11:10:23.993493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.387 [2024-10-15 11:10:23.993511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.387 [2024-10-15 11:10:23.993566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.387 [2024-10-15 11:10:23.993582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.646 #38 NEW cov: 12448 ft: 15678 corp: 33/604b lim: 25 exec/s: 38 rss: 75Mb L: 18/25 MS: 1 CrossOver- 00:10:43.646 [2024-10-15 11:10:24.053593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.646 [2024-10-15 11:10:24.053619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.646 [2024-10-15 11:10:24.053694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.646 [2024-10-15 11:10:24.053709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.646 [2024-10-15 11:10:24.053764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.646 [2024-10-15 11:10:24.053779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.646 [2024-10-15 11:10:24.053832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.646 [2024-10-15 11:10:24.053846] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.646 #39 NEW cov: 12448 ft: 15682 corp: 34/624b lim: 25 exec/s: 39 rss: 75Mb L: 20/25 MS: 1 ChangeBit- 00:10:43.646 [2024-10-15 11:10:24.093845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.646 [2024-10-15 11:10:24.093871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.646 [2024-10-15 11:10:24.093940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.647 [2024-10-15 11:10:24.093957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.647 [2024-10-15 11:10:24.094011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.647 [2024-10-15 11:10:24.094035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.647 [2024-10-15 11:10:24.094088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.647 [2024-10-15 11:10:24.094104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.647 [2024-10-15 11:10:24.094170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:43.647 [2024-10-15 11:10:24.094186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:43.647 #40 NEW cov: 12448 ft: 15696 corp: 35/649b lim: 25 exec/s: 40 rss: 75Mb L: 25/25 MS: 1 ChangeBit- 00:10:43.647 [2024-10-15 11:10:24.153919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:43.647 [2024-10-15 11:10:24.153947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:43.647 [2024-10-15 11:10:24.154003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:43.647 [2024-10-15 11:10:24.154019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:43.647 [2024-10-15 11:10:24.154077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:43.647 [2024-10-15 11:10:24.154094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:43.647 [2024-10-15 11:10:24.154148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:43.647 [2024-10-15 11:10:24.154164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:43.647 #41 NEW cov: 12448 ft: 15732 corp: 36/673b lim: 25 exec/s: 20 rss: 75Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:10:43.647 #41 DONE cov: 12448 ft: 15732 corp: 36/673b lim: 25 exec/s: 20 rss: 75Mb 00:10:43.647 ###### Recommended dictionary. 
###### 00:10:43.647 "\001\000\000\000\000\000\000\000" # Uses: 3 00:10:43.647 ###### End of recommended dictionary. ###### 00:10:43.647 Done 41 runs in 2 second(s) 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:43.906 11:10:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:10:43.906 [2024-10-15 11:10:24.344615] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
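For readers following the trace: each start_llvm_fuzz iteration above repeats the same nvmf/run.sh setup with only the fuzzer number changed. A condensed, illustrative sketch of that per-instance setup follows; it is reconstructed from the traced commands (run.sh@23-45 and the @54 cleanup), not the verbatim script, and SPDK_ROOT/OUT are stand-ins for the Jenkins workspace paths:

  fuzzer_type=24                                   # was 23 on the previous run
  timen=1                                          # -t: time budget in seconds
  core=0x1                                         # -m: core mask
  port=44$(printf %02d "$fuzzer_type")             # 4423, 4424, ...
  corpus_dir=$SPDK_ROOT/../corpus/llvm_nvmf_$fuzzer_type
  nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  mkdir -p "$corpus_dir"
  # Rewrite the shared NVMe/TCP target config so this instance gets its own port.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Suppress two known leak reports so LSAN does not fail the short run
  # (the redirection into the file is implied by the trace, not shown in it).
  echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
  echo leak:nvmf_ctrlr_create >> "$suppress_file"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
      -P "$OUT/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
  rm -rf "$nvmf_cfg" "$suppress_file"              # run.sh@54 cleanup after the run

Deriving the port from the fuzzer number keeps each target's NVMe/TCP listener distinct, which is why run 23 listens on 127.0.0.1:4423 above and run 24 on 127.0.0.1:4424 below.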
00:10:43.906 [2024-10-15 11:10:24.344685] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721982 ] 00:10:43.906 [2024-10-15 11:10:24.523620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.166 [2024-10-15 11:10:24.563805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.166 [2024-10-15 11:10:24.622923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.166 [2024-10-15 11:10:24.639096] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:10:44.166 INFO: Running with entropic power schedule (0xFF, 100). 00:10:44.166 INFO: Seed: 2703740681 00:10:44.166 INFO: Loaded 1 modules (385236 inline 8-bit counters): 385236 [0x2c0170c, 0x2c5f7e0), 00:10:44.166 INFO: Loaded 1 PC tables (385236 PCs): 385236 [0x2c5f7e0,0x3240520), 00:10:44.166 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:10:44.166 INFO: A corpus is not provided, starting from an empty corpus 00:10:44.166 #2 INITED exec/s: 0 rss: 66Mb 00:10:44.166 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:44.166 This may also happen if the target rejected all inputs we tried so far 00:10:44.166 [2024-10-15 11:10:24.708931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.166 [2024-10-15 11:10:24.708974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.166 [2024-10-15 11:10:24.709082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.166 [2024-10-15 11:10:24.709104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:44.166 [2024-10-15 11:10:24.709187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.166 [2024-10-15 11:10:24.709210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:44.425 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:10:44.425 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:44.425 #10 NEW cov: 12293 ft: 12294 corp: 2/63b lim: 100 exec/s: 0 rss: 73Mb L: 62/62 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:10:44.425 [2024-10-15 11:10:25.049800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.425 [2024-10-15 11:10:25.049842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.425 [2024-10-15 11:10:25.049923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 
nsid:0 lba:18157383382355999739 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.425 [2024-10-15 11:10:25.049944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:44.425 [2024-10-15 11:10:25.050045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.425 [2024-10-15 11:10:25.050065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:44.684 #11 NEW cov: 12406 ft: 12901 corp: 3/125b lim: 100 exec/s: 0 rss: 73Mb L: 62/62 MS: 1 ChangeByte- 00:10:44.684 [2024-10-15 11:10:25.120331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.120361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.120428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.120447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.120513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.120532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.120628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.120646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:44.684 #12 NEW cov: 12412 ft: 13522 corp: 4/208b lim: 100 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:10:44.684 [2024-10-15 11:10:25.169449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.169482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.684 #13 NEW cov: 12497 ft: 14643 corp: 5/241b lim: 100 exec/s: 0 rss: 73Mb L: 33/83 MS: 1 EraseBytes- 00:10:44.684 [2024-10-15 11:10:25.240717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.240748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.240835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.240857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.240924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157343645319822331 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.240942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.241043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15553137160186484695 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.241063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:44.684 #14 NEW cov: 12497 ft: 14695 corp: 6/333b lim: 100 exec/s: 0 rss: 73Mb L: 92/92 MS: 1 InsertRepeatedBytes- 00:10:44.684 [2024-10-15 11:10:25.290525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.290553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.290627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.290644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:44.684 [2024-10-15 11:10:25.290710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.684 [2024-10-15 11:10:25.290727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:44.684 #15 NEW cov: 12497 ft: 14744 corp: 7/395b lim: 100 exec/s: 0 rss: 73Mb L: 62/92 MS: 1 CMP- DE: "\000\000\000\000"- 00:10:44.943 [2024-10-15 11:10:25.340947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.943 [2024-10-15 11:10:25.340980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.943 [2024-10-15 11:10:25.341044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744056462310399 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.943 [2024-10-15 11:10:25.341063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:44.943 [2024-10-15 11:10:25.341126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.943 [2024-10-15 11:10:25.341146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:44.943 #16 NEW cov: 12497 ft: 14781 corp: 8/461b lim: 100 exec/s: 0 rss: 73Mb L: 66/92 MS: 1 InsertRepeatedBytes- 00:10:44.943 [2024-10-15 11:10:25.390350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.943 [2024-10-15 11:10:25.390380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.943 #17 NEW cov: 12497 ft: 14866 corp: 9/494b lim: 100 exec/s: 0 rss: 73Mb L: 33/92 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:10:44.943 [2024-10-15 11:10:25.460619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18100806912038403067 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.943 [2024-10-15 11:10:25.460648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.944 #18 NEW cov: 12497 ft: 14884 corp: 10/527b lim: 100 exec/s: 0 rss: 73Mb L: 33/92 MS: 1 ChangeByte- 00:10:44.944 [2024-10-15 11:10:25.510894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18100806912038403067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:44.944 [2024-10-15 11:10:25.510924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:44.944 #19 NEW cov: 12497 ft: 14936 corp: 11/560b lim: 100 exec/s: 0 rss: 74Mb L: 33/92 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:10:45.203 [2024-10-15 11:10:25.582570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.203 [2024-10-15 11:10:25.582600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.203 [2024-10-15 11:10:25.582689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.203 [2024-10-15 11:10:25.582705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.203 [2024-10-15 11:10:25.582779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157343645319822331 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.203 [2024-10-15 11:10:25.582795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.203 [2024-10-15 11:10:25.582887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15553137160186484695 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.203 [2024-10-15 11:10:25.582907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.203 NEW_FUNC[1/1]: 0x1c07588 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:45.203 #20 NEW cov: 12520 ft: 14997 corp: 12/651b lim: 100 exec/s: 0 rss: 74Mb L: 91/92 MS: 1 EraseBytes- 00:10:45.203 [2024-10-15 11:10:25.652381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.203 [2024-10-15 11:10:25.652411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.204 [2024-10-15 11:10:25.652492] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.652511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.204 [2024-10-15 11:10:25.652568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.652587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.204 #21 NEW cov: 12520 ft: 15043 corp: 13/713b lim: 100 exec/s: 0 rss: 74Mb L: 62/92 MS: 1 CopyPart- 00:10:45.204 [2024-10-15 11:10:25.702666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.702696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.204 [2024-10-15 11:10:25.702760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.702779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.204 [2024-10-15 11:10:25.702831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.702851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.204 #27 NEW cov: 12520 ft: 15065 corp: 14/775b lim: 100 exec/s: 27 rss: 74Mb L: 62/92 MS: 1 ShuffleBytes- 00:10:45.204 [2024-10-15 11:10:25.752047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18100806912038403067 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.752076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.204 #28 NEW cov: 12520 ft: 15083 corp: 15/808b lim: 100 exec/s: 28 rss: 74Mb L: 33/92 MS: 1 ChangeBinInt- 00:10:45.204 [2024-10-15 11:10:25.802364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.204 [2024-10-15 11:10:25.802393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.204 #34 NEW cov: 12520 ft: 15187 corp: 16/841b lim: 100 exec/s: 34 rss: 74Mb L: 33/92 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:10:45.463 [2024-10-15 11:10:25.853460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.853491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:25.853564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.853582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:25.853657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.853674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:25.853762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.853781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.463 #35 NEW cov: 12520 ft: 15213 corp: 17/924b lim: 100 exec/s: 35 rss: 74Mb L: 83/92 MS: 1 ChangeBit- 00:10:45.463 [2024-10-15 11:10:25.923805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.923834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:25.923899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.923928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:25.924008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.924033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:25.924124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5714873654208057167 len:20304 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.924145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.463 #36 NEW cov: 12520 ft: 15263 corp: 18/1008b lim: 100 exec/s: 36 rss: 74Mb L: 84/92 MS: 1 InsertRepeatedBytes- 00:10:45.463 [2024-10-15 11:10:25.973235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:25.973265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.463 #37 NEW cov: 12520 ft: 15294 corp: 19/1041b lim: 100 exec/s: 37 rss: 74Mb L: 33/92 MS: 1 CopyPart- 00:10:45.463 [2024-10-15 11:10:26.044637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:26.044667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:26.044736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:26.044753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:26.044831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157343645319822331 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:26.044851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.463 [2024-10-15 11:10:26.044938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15553137160186484695 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.463 [2024-10-15 11:10:26.044958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.463 #38 NEW cov: 12520 ft: 15322 corp: 20/1137b lim: 100 exec/s: 38 rss: 74Mb L: 96/96 MS: 1 CrossOver- 00:10:45.722 [2024-10-15 11:10:26.113742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.113772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.723 #39 NEW cov: 12520 ft: 15332 corp: 21/1170b lim: 100 exec/s: 39 rss: 74Mb L: 33/96 MS: 1 ChangeBinInt- 00:10:45.723 [2024-10-15 11:10:26.165031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.165060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.723 [2024-10-15 11:10:26.165149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.165170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.723 [2024-10-15 11:10:26.165224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157343645319822331 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.165243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.723 [2024-10-15 11:10:26.165336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15553137160186484695 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.165354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.723 #40 NEW cov: 12520 ft: 15353 corp: 22/1266b lim: 100 exec/s: 40 rss: 74Mb L: 96/96 MS: 1 ChangeBinInt- 00:10:45.723 [2024-10-15 11:10:26.234717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.234748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.723 [2024-10-15 11:10:26.234816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.234836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.723 #41 NEW cov: 12520 ft: 15692 corp: 23/1325b lim: 100 exec/s: 41 rss: 74Mb L: 59/96 MS: 1 EraseBytes- 00:10:45.723 [2024-10-15 11:10:26.304940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.723 [2024-10-15 11:10:26.304969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.723 #42 NEW cov: 12520 ft: 15700 corp: 24/1358b lim: 100 exec/s: 42 rss: 74Mb L: 33/96 MS: 1 CMP- DE: "\000\000\000\000\000\000\004\000"- 00:10:45.982 [2024-10-15 11:10:26.375799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.375830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.375903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382355999739 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.375921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.375984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.376003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.982 #43 NEW cov: 12520 ft: 15718 corp: 25/1420b lim: 100 exec/s: 43 rss: 74Mb L: 62/96 MS: 1 ChangeByte- 00:10:45.982 [2024-10-15 11:10:26.436753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.436783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.436860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.436880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.436952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157343645319822331 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.436971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.437058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15553137160186484695 len:55256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.437077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.982 #44 NEW cov: 12520 ft: 15734 corp: 26/1512b lim: 100 exec/s: 44 rss: 74Mb L: 92/96 MS: 1 ShuffleBytes- 00:10:45.982 [2024-10-15 11:10:26.487077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.487104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.487192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:277059682893824 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.487210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.487305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.487325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.487414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.487432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:45.982 #45 NEW cov: 12520 ft: 15794 corp: 27/1600b lim: 100 exec/s: 45 rss: 74Mb L: 88/96 MS: 1 CrossOver- 00:10:45.982 [2024-10-15 11:10:26.537048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.537076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.982 [2024-10-15 11:10:26.537149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382355999739 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.982 [2024-10-15 11:10:26.537166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.983 [2024-10-15 11:10:26.537255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.983 [2024-10-15 11:10:26.537274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.983 #46 NEW cov: 12520 ft: 15800 corp: 28/1662b lim: 100 exec/s: 46 rss: 74Mb L: 62/96 MS: 1 ChangeBit- 00:10:45.983 [2024-10-15 11:10:26.607527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:10:45.983 [2024-10-15 11:10:26.607556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:45.983 [2024-10-15 11:10:26.607634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.983 [2024-10-15 11:10:26.607652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:45.983 [2024-10-15 11:10:26.607725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.983 [2024-10-15 11:10:26.607741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:45.983 [2024-10-15 11:10:26.607834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:45.983 [2024-10-15 11:10:26.607851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:46.242 #47 NEW cov: 12520 ft: 15802 corp: 29/1745b lim: 100 exec/s: 47 rss: 74Mb L: 83/96 MS: 1 ShuffleBytes- 00:10:46.242 [2024-10-15 11:10:26.677917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:46.242 [2024-10-15 11:10:26.677944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:46.242 [2024-10-15 11:10:26.678031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:46.242 [2024-10-15 11:10:26.678049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:46.242 [2024-10-15 11:10:26.678133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:46.242 [2024-10-15 11:10:26.678152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:46.242 [2024-10-15 11:10:26.678242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18119108304557571067 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:46.242 [2024-10-15 11:10:26.678259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:46.242 #48 NEW cov: 12520 ft: 15822 corp: 30/1832b lim: 100 exec/s: 24 rss: 74Mb L: 87/96 MS: 1 CMP- DE: "t\001\000\000"- 00:10:46.242 #48 DONE cov: 12520 ft: 15822 corp: 30/1832b lim: 100 exec/s: 24 rss: 74Mb 00:10:46.242 ###### Recommended dictionary. ###### 00:10:46.242 "\000\000\000\000" # Uses: 3 00:10:46.242 "\000\000\000\000\000\000\004\000" # Uses: 0 00:10:46.242 "t\001\000\000" # Uses: 0 00:10:46.242 ###### End of recommended dictionary. 
###### 00:10:46.242 Done 48 runs in 2 second(s) 00:10:46.242 11:10:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:10:46.242 11:10:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:46.242 11:10:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:46.242 11:10:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:10:46.242 00:10:46.242 real 1m3.736s 00:10:46.242 user 1m39.785s 00:10:46.242 sys 0m7.586s 00:10:46.242 11:10:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.242 11:10:26 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.242 ************************************ 00:10:46.242 END TEST nvmf_llvm_fuzz 00:10:46.242 ************************************ 00:10:46.242 11:10:26 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:10:46.242 11:10:26 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:10:46.242 11:10:26 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:10:46.242 11:10:26 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:46.242 11:10:26 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.242 11:10:26 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.503 ************************************ 00:10:46.503 START TEST vfio_llvm_fuzz 00:10:46.503 ************************************ 00:10:46.503 11:10:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:10:46.503 * Looking for test storage... 
00:10:46.503 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:46.503 11:10:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.503 11:10:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.503 11:10:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.503 --rc genhtml_branch_coverage=1 00:10:46.503 --rc genhtml_function_coverage=1 00:10:46.503 --rc genhtml_legend=1 00:10:46.503 --rc geninfo_all_blocks=1 00:10:46.503 --rc geninfo_unexecuted_blocks=1 00:10:46.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.503 ' 00:10:46.503 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.503 --rc genhtml_branch_coverage=1 00:10:46.503 --rc genhtml_function_coverage=1 00:10:46.503 --rc genhtml_legend=1 00:10:46.504 --rc geninfo_all_blocks=1 00:10:46.504 --rc geninfo_unexecuted_blocks=1 00:10:46.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.504 ' 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.504 --rc genhtml_branch_coverage=1 00:10:46.504 --rc genhtml_function_coverage=1 00:10:46.504 --rc genhtml_legend=1 00:10:46.504 --rc geninfo_all_blocks=1 00:10:46.504 --rc geninfo_unexecuted_blocks=1 00:10:46.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.504 ' 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.504 --rc genhtml_branch_coverage=1 00:10:46.504 --rc genhtml_function_coverage=1 00:10:46.504 --rc genhtml_legend=1 00:10:46.504 --rc geninfo_all_blocks=1 00:10:46.504 --rc geninfo_unexecuted_blocks=1 00:10:46.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.504 ' 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:46.504 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:46.505 #define SPDK_CONFIG_H 00:10:46.505 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:46.505 #define SPDK_CONFIG_APPS 1 00:10:46.505 #define SPDK_CONFIG_ARCH native 00:10:46.505 #undef SPDK_CONFIG_ASAN 00:10:46.505 #undef SPDK_CONFIG_AVAHI 00:10:46.505 #undef SPDK_CONFIG_CET 00:10:46.505 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:46.505 #define SPDK_CONFIG_COVERAGE 1 00:10:46.505 #define SPDK_CONFIG_CROSS_PREFIX 00:10:46.505 #undef SPDK_CONFIG_CRYPTO 00:10:46.505 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:46.505 #undef SPDK_CONFIG_CUSTOMOCF 00:10:46.505 #undef SPDK_CONFIG_DAOS 00:10:46.505 #define SPDK_CONFIG_DAOS_DIR 00:10:46.505 #define SPDK_CONFIG_DEBUG 1 00:10:46.505 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:46.505 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:10:46.505 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:46.505 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:46.505 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:46.505 #undef SPDK_CONFIG_DPDK_UADK 00:10:46.505 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:10:46.505 #define SPDK_CONFIG_EXAMPLES 1 00:10:46.505 #undef SPDK_CONFIG_FC 00:10:46.505 #define SPDK_CONFIG_FC_PATH 00:10:46.505 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:46.505 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:46.505 #define SPDK_CONFIG_FSDEV 1 00:10:46.505 #undef SPDK_CONFIG_FUSE 00:10:46.505 #define SPDK_CONFIG_FUZZER 1 00:10:46.505 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:10:46.505 #undef SPDK_CONFIG_GOLANG 00:10:46.505 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:46.505 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:46.505 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:46.505 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:46.505 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:46.505 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:46.505 #undef SPDK_CONFIG_HAVE_LZ4 00:10:46.505 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:46.505 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:46.505 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:46.505 #define SPDK_CONFIG_IDXD 1 00:10:46.505 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:46.505 #undef SPDK_CONFIG_IPSEC_MB 00:10:46.505 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:46.505 #define SPDK_CONFIG_ISAL 1 00:10:46.505 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:46.505 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:46.505 #define SPDK_CONFIG_LIBDIR 00:10:46.505 #undef SPDK_CONFIG_LTO 00:10:46.505 #define SPDK_CONFIG_MAX_LCORES 128 00:10:46.505 #define SPDK_CONFIG_NVME_CUSE 1 00:10:46.505 #undef SPDK_CONFIG_OCF 00:10:46.505 #define SPDK_CONFIG_OCF_PATH 00:10:46.505 #define SPDK_CONFIG_OPENSSL_PATH 00:10:46.505 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:46.505 #define SPDK_CONFIG_PGO_DIR 00:10:46.505 #undef SPDK_CONFIG_PGO_USE 00:10:46.505 #define SPDK_CONFIG_PREFIX /usr/local 00:10:46.505 #undef SPDK_CONFIG_RAID5F 00:10:46.505 #undef SPDK_CONFIG_RBD 00:10:46.505 #define SPDK_CONFIG_RDMA 1 00:10:46.505 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:46.505 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:46.505 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:46.505 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:46.505 #undef SPDK_CONFIG_SHARED 00:10:46.505 #undef SPDK_CONFIG_SMA 00:10:46.505 #define SPDK_CONFIG_TESTS 1 00:10:46.505 #undef SPDK_CONFIG_TSAN 00:10:46.505 #define SPDK_CONFIG_UBLK 1 00:10:46.505 #define SPDK_CONFIG_UBSAN 1 00:10:46.505 #undef SPDK_CONFIG_UNIT_TESTS 00:10:46.505 #undef SPDK_CONFIG_URING 00:10:46.505 #define SPDK_CONFIG_URING_PATH 00:10:46.505 #undef SPDK_CONFIG_URING_ZNS 00:10:46.505 #undef SPDK_CONFIG_USDT 00:10:46.505 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:46.505 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:46.505 #define SPDK_CONFIG_VFIO_USER 1 00:10:46.505 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:46.505 #define SPDK_CONFIG_VHOST 1 00:10:46.505 #define SPDK_CONFIG_VIRTIO 1 00:10:46.505 #undef SPDK_CONFIG_VTUNE 00:10:46.505 #define SPDK_CONFIG_VTUNE_DIR 00:10:46.505 #define SPDK_CONFIG_WERROR 1 00:10:46.505 #define SPDK_CONFIG_WPDK_DIR 00:10:46.505 #undef SPDK_CONFIG_XNVME 00:10:46.505 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:46.505 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:46.768 11:10:27 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:10:46.768 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
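Note: each `-- # : 0` / `-- # export SPDK_TEST_*` pair in the xtrace above is a default-then-export idiom: the no-op `:` builtin forces a parameter expansion that assigns a fallback value only when the flag was not already set (here, by autorun-spdk.conf), and the export makes the result visible to every child test script. A minimal sketch of the pattern, reusing flag names from this trace (the exact quoting used in autotest_common.sh is an assumption):

    #!/usr/bin/env bash
    # Default each test flag unless the caller already set it, then export
    # it so spawned test scripts see the same value. SPDK_TEST_FUZZER stays
    # 1 in this job because the job configuration set it before sourcing.
    : "${SPDK_TEST_FUZZER=0}"
    : "${SPDK_TEST_FUZZER_SHORT=0}"
    : "${SPDK_RUN_UBSAN=0}"
    export SPDK_TEST_FUZZER SPDK_TEST_FUZZER_SHORT SPDK_RUN_UBSAN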
00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3722367 ]] 00:10:46.769 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3722367 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.pTlwEm 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.pTlwEm/tests/vfio /tmp/spdk.pTlwEm 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86816407552 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500372480 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7683964928 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 
11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245422592 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894340096 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900074496 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5734400 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249801216 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=385024 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450024960 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450037248 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:46.770 * Looking for test storage... 
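Note: the set_test_storage trace above walks `df -T` output into parallel associative arrays keyed by mount point, then checks each storage candidate for enough free space. A sketch of that parsing step, mirroring the variable names visible in the trace (column units follow df's defaults; the surrounding candidate-selection logic is abridged):

    #!/usr/bin/env bash
    # Parse `df -T` into per-mount associative arrays, as the trace does.
    # Columns: Filesystem Type Size Used Avail Use% Mounted-on.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    requested_size=2214592512          # 2 GiB plus slack, per the trace
    target_space=${avails[/]}          # the "/" overlay is chosen above
    (( target_space >= requested_size )) && echo "enough space on /"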
00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86816407552 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=9898557440 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:46.770 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.770 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.771 --rc genhtml_branch_coverage=1 00:10:46.771 --rc genhtml_function_coverage=1 00:10:46.771 --rc genhtml_legend=1 00:10:46.771 --rc geninfo_all_blocks=1 00:10:46.771 --rc geninfo_unexecuted_blocks=1 00:10:46.771 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.771 ' 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.771 --rc genhtml_branch_coverage=1 00:10:46.771 --rc genhtml_function_coverage=1 00:10:46.771 --rc genhtml_legend=1 00:10:46.771 --rc geninfo_all_blocks=1 00:10:46.771 --rc geninfo_unexecuted_blocks=1 00:10:46.771 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.771 ' 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.771 --rc genhtml_branch_coverage=1 00:10:46.771 --rc genhtml_function_coverage=1 00:10:46.771 --rc genhtml_legend=1 00:10:46.771 --rc geninfo_all_blocks=1 00:10:46.771 --rc geninfo_unexecuted_blocks=1 00:10:46.771 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.771 ' 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.771 --rc genhtml_branch_coverage=1 00:10:46.771 --rc genhtml_function_coverage=1 00:10:46.771 --rc genhtml_legend=1 00:10:46.771 --rc geninfo_all_blocks=1 00:10:46.771 --rc geninfo_unexecuted_blocks=1 00:10:46.771 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:46.771 ' 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:10:46.771 11:10:27 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:10:46.771 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:46.771 11:10:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:10:46.771 [2024-10-15 11:10:27.381436] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:46.771 [2024-10-15 11:10:27.381508] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722509 ] 00:10:47.030 [2024-10-15 11:10:27.453377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.030 [2024-10-15 11:10:27.498198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.289 INFO: Running with entropic power schedule (0xFF, 100). 00:10:47.289 INFO: Seed: 1439802915 00:10:47.289 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:10:47.289 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:10:47.289 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:10:47.289 INFO: A corpus is not provided, starting from an empty corpus 00:10:47.289 #2 INITED exec/s: 0 rss: 67Mb 00:10:47.289 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:47.289 This may also happen if the target rejected all inputs we tried so far 00:10:47.289 [2024-10-15 11:10:27.738912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:10:47.806 NEW_FUNC[1/671]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:10:47.806 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:47.806 #5 NEW cov: 11154 ft: 11117 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 3 CopyPart-InsertRepeatedBytes-InsertByte- 00:10:47.806 #19 NEW cov: 11168 ft: 14111 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 4 CMP-CopyPart-ShuffleBytes-InsertByte- DE: "\001\002"- 00:10:48.064 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:48.064 #25 NEW cov: 11185 ft: 14998 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:10:48.322 #26 NEW cov: 11185 ft: 16192 corp: 5/25b lim: 6 exec/s: 26 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:10:48.580 #27 NEW cov: 11185 ft: 16969 corp: 6/31b lim: 6 exec/s: 27 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:10:48.838 #28 NEW cov: 11185 ft: 17215 corp: 7/37b lim: 6 exec/s: 28 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:10:48.838 #29 NEW cov: 11185 ft: 17881 corp: 8/43b lim: 6 exec/s: 29 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:10:49.095 #30 NEW cov: 11192 ft: 18057 corp: 9/49b lim: 6 exec/s: 30 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:10:49.353 #31 NEW cov: 11192 ft: 18410 corp: 10/55b lim: 6 exec/s: 15 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:10:49.353 #31 DONE cov: 11192 ft: 18410 corp: 10/55b lim: 6 exec/s: 15 rss: 77Mb 00:10:49.353 ###### Recommended dictionary. ###### 00:10:49.353 "\001\002" # Uses: 0 00:10:49.353 ###### End of recommended dictionary. 
###### 00:10:49.353 Done 31 runs in 2 second(s) 00:10:49.353 [2024-10-15 11:10:29.834226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:10:49.613 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:49.613 11:10:30 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:10:49.613 [2024-10-15 11:10:30.101573] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
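Note: the driver visible in the trace is a plain counted loop. ../common.sh counts the fuzzer entry points registered in llvm_vfio_fuzz.c, runs each one for one second, and vfio/run.sh tears down the per-run /tmp/vfio-user-N directory before the next iteration; the log above shows it advancing from run 0 to run 1. A condensed sketch of that control flow using names and flags from the trace (paths are shortened and several invocation flags such as -P, -D, -Y, and -r are omitted for brevity):

    #!/usr/bin/env bash
    # Condensed from the xtrace above: run fuzzers 0..fuzz_num-1 for 1s each.
    fuzz_num=$(grep -c '\.fn =' llvm_vfio_fuzz.c)   # 7 entry points here
    time=1
    for (( i = 0; i < fuzz_num; i++ )); do
        mkdir -p "/tmp/vfio-user-$i/domain/1" "/tmp/vfio-user-$i/domain/2"
        # Rewrite the template config to point at this run's vfio-user dirs.
        sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$i/domain/1%; s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$i/domain/2%" \
            fuzz_vfio_json.conf > "/tmp/vfio-user-$i/fuzz_vfio_json.conf"
        ./llvm_vfio_fuzz -m 0x1 -s 0 -F "/tmp/vfio-user-$i/domain/1" \
            -c "/tmp/vfio-user-$i/fuzz_vfio_json.conf" -t "$time" -Z "$i"
        rm -rf "/tmp/vfio-user-$i"
    done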
00:10:49.613 [2024-10-15 11:10:30.101646] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722897 ] 00:10:49.613 [2024-10-15 11:10:30.168614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.613 [2024-10-15 11:10:30.214982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.871 INFO: Running with entropic power schedule (0xFF, 100). 00:10:49.871 INFO: Seed: 4159796873 00:10:49.871 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:10:49.871 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:10:49.871 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:49.871 INFO: A corpus is not provided, starting from an empty corpus 00:10:49.871 #2 INITED exec/s: 0 rss: 68Mb 00:10:49.871 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:49.871 This may also happen if the target rejected all inputs we tried so far 00:10:49.871 [2024-10-15 11:10:30.455172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:10:50.130 [2024-10-15 11:10:30.511071] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:50.130 [2024-10-15 11:10:30.511096] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:50.130 [2024-10-15 11:10:30.511114] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:50.388 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:10:50.388 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:50.388 #5 NEW cov: 11145 ft: 10979 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 3 CopyPart-CopyPart-CopyPart- 00:10:50.388 [2024-10-15 11:10:31.017081] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:50.388 [2024-10-15 11:10:31.017118] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:50.388 [2024-10-15 11:10:31.017138] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:50.646 #11 NEW cov: 11161 ft: 13633 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CMP- DE: "\377\377"- 00:10:50.646 [2024-10-15 11:10:31.221682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:50.646 [2024-10-15 11:10:31.221707] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:50.646 [2024-10-15 11:10:31.221726] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:50.904 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:50.904 #17 NEW cov: 11181 ft: 15043 corp: 4/13b lim: 4 exec/s: 0 rss: 77Mb L: 4/4 MS: 1 ChangeBinInt- 00:10:50.904 [2024-10-15 11:10:31.433328] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:50.904 [2024-10-15 11:10:31.433351] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 
00:10:50.904 [2024-10-15 11:10:31.433385] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:51.161 #18 NEW cov: 11181 ft: 16006 corp: 5/17b lim: 4 exec/s: 18 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:10:51.161 [2024-10-15 11:10:31.635867] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:51.161 [2024-10-15 11:10:31.635890] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:51.161 [2024-10-15 11:10:31.635908] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:51.161 #19 NEW cov: 11181 ft: 16365 corp: 6/21b lim: 4 exec/s: 19 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:10:51.419 [2024-10-15 11:10:31.846706] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:51.419 [2024-10-15 11:10:31.846732] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:51.419 [2024-10-15 11:10:31.846750] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:51.419 #20 NEW cov: 11181 ft: 16634 corp: 7/25b lim: 4 exec/s: 20 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:10:51.419 [2024-10-15 11:10:32.048227] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:51.419 [2024-10-15 11:10:32.048250] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:51.419 [2024-10-15 11:10:32.048267] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:51.677 #21 NEW cov: 11181 ft: 17055 corp: 8/29b lim: 4 exec/s: 21 rss: 77Mb L: 4/4 MS: 1 PersAutoDict- DE: "\377\377"- 00:10:51.677 [2024-10-15 11:10:32.249396] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:51.677 [2024-10-15 11:10:32.249418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:51.677 [2024-10-15 11:10:32.249451] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:51.935 #25 NEW cov: 11188 ft: 17238 corp: 9/33b lim: 4 exec/s: 25 rss: 77Mb L: 4/4 MS: 4 ShuffleBytes-ChangeBit-CMP-InsertByte- DE: "\000\000"- 00:10:51.935 [2024-10-15 11:10:32.462400] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:51.935 [2024-10-15 11:10:32.462423] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:51.935 [2024-10-15 11:10:32.462456] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:52.193 #26 NEW cov: 11188 ft: 17448 corp: 10/37b lim: 4 exec/s: 13 rss: 77Mb L: 4/4 MS: 1 PersAutoDict- DE: "\000\000"- 00:10:52.193 #26 DONE cov: 11188 ft: 17448 corp: 10/37b lim: 4 exec/s: 13 rss: 77Mb 00:10:52.193 ###### Recommended dictionary. ###### 00:10:52.193 "\377\377" # Uses: 2 00:10:52.193 "\000\000" # Uses: 1 00:10:52.193 ###### End of recommended dictionary. 
###### 00:10:52.193 Done 26 runs in 2 second(s) 00:10:52.193 [2024-10-15 11:10:32.603238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:10:52.452 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:52.452 11:10:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:10:52.452 [2024-10-15 11:10:32.869808] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
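Note: the `#N NEW cov: ... ft: ...` lines in these runs are libFuzzer progress reports (cov = covered code edges, ft = coverage features, corp = corpus entries and total bytes, L = input length against the size limit, MS = the mutation sequence that produced the input), and the "Recommended dictionary" blocks that close each run are printed in libFuzzer's dictionary syntax so the suggested tokens can be fed back into a longer campaign. A hypothetical dictionary file built from the entries recommended above; the file name is invented, the octal escapes from the log are rewritten as hex, and whether this SPDK wrapper forwards libFuzzer's -dict= flag is an assumption, not something this log shows:

    # vfio_fuzz.dict -- libFuzzer/AFL dictionary syntax, one token per line.
    # Tokens recommended by the runs above; \xNN escapes are hex bytes.
    kw_version="\x01\x02"
    kw_ones="\xff\xff"
    kw_zeros="\x00\x00"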
00:10:52.452 [2024-10-15 11:10:32.869881] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723310 ] 00:10:52.452 [2024-10-15 11:10:32.939510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.452 [2024-10-15 11:10:32.983772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.710 INFO: Running with entropic power schedule (0xFF, 100). 00:10:52.710 INFO: Seed: 2629823396 00:10:52.710 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:10:52.710 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:10:52.710 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:52.710 INFO: A corpus is not provided, starting from an empty corpus 00:10:52.710 #2 INITED exec/s: 0 rss: 67Mb 00:10:52.711 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:52.711 This may also happen if the target rejected all inputs we tried so far 00:10:52.711 [2024-10-15 11:10:33.221821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:10:52.711 [2024-10-15 11:10:33.277434] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:53.227 NEW_FUNC[1/671]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:10:53.227 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:53.227 #12 NEW cov: 11127 ft: 11017 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 5 CrossOver-InsertByte-CrossOver-InsertRepeatedBytes-CrossOver- 00:10:53.227 [2024-10-15 11:10:33.773247] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:53.484 NEW_FUNC[1/1]: 0x194c818 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1161 00:10:53.484 #23 NEW cov: 11147 ft: 13907 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:10:53.484 [2024-10-15 11:10:33.965062] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:53.484 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:53.484 #24 NEW cov: 11164 ft: 15665 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:10:53.742 [2024-10-15 11:10:34.166806] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:53.742 #26 NEW cov: 11164 ft: 16637 corp: 5/33b lim: 8 exec/s: 26 rss: 76Mb L: 8/8 MS: 2 ChangeByte-CrossOver- 00:10:53.742 [2024-10-15 11:10:34.368778] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:54.000 #32 NEW cov: 11164 ft: 16984 corp: 6/41b lim: 8 exec/s: 32 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:10:54.000 [2024-10-15 11:10:34.568568] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:54.257 #33 NEW cov: 11164 ft: 17022 corp: 7/49b lim: 8 exec/s: 33 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:10:54.257 [2024-10-15 11:10:34.759514] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: 
Oversized argument length, command 5 00:10:54.257 #34 NEW cov: 11164 ft: 17167 corp: 8/57b lim: 8 exec/s: 34 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:10:54.515 [2024-10-15 11:10:34.952773] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:54.515 #40 NEW cov: 11171 ft: 17722 corp: 9/65b lim: 8 exec/s: 40 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:10:54.515 [2024-10-15 11:10:35.145351] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:54.774 #50 NEW cov: 11171 ft: 17771 corp: 10/73b lim: 8 exec/s: 25 rss: 77Mb L: 8/8 MS: 5 EraseBytes-CrossOver-ShuffleBytes-ChangeByte-InsertByte- 00:10:54.774 #50 DONE cov: 11171 ft: 17771 corp: 10/73b lim: 8 exec/s: 25 rss: 77Mb 00:10:54.774 Done 50 runs in 2 second(s) 00:10:54.774 [2024-10-15 11:10:35.285242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:10:55.032 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:55.032 11:10:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:10:55.032 [2024-10-15 11:10:35.551442] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:55.032 [2024-10-15 11:10:35.551516] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723664 ] 00:10:55.032 [2024-10-15 11:10:35.622595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.291 [2024-10-15 11:10:35.669089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.291 INFO: Running with entropic power schedule (0xFF, 100). 00:10:55.291 INFO: Seed: 1022843113 00:10:55.291 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:10:55.291 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:10:55.291 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:55.291 INFO: A corpus is not provided, starting from an empty corpus 00:10:55.291 #2 INITED exec/s: 0 rss: 68Mb 00:10:55.291 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:55.291 This may also happen if the target rejected all inputs we tried so far 00:10:55.550 [2024-10-15 11:10:35.922351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:10:55.808 NEW_FUNC[1/672]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:10:55.808 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:55.808 #74 NEW cov: 11137 ft: 10868 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 InsertByte-InsertRepeatedBytes- 00:10:56.066 [2024-10-15 11:10:36.482826] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0 prot=0x3: Invalid argument 00:10:56.066 [2024-10-15 11:10:36.482869] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:10:56.066 [2024-10-15 11:10:36.482880] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:56.066 [2024-10-15 11:10:36.482914] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:56.066 NEW_FUNC[1/1]: 0x154e548 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3094 00:10:56.066 #75 NEW cov: 11162 ft: 14502 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:10:56.066 [2024-10-15 11:10:36.696246] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x2000000000000000, 0x2000000000000000) fd=302 offset=0 prot=0x3: Invalid argument 00:10:56.067 [2024-10-15 11:10:36.696280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x2000000000000000, 0x2000000000000000) offset=0 flags=0x3: Invalid argument 00:10:56.067 [2024-10-15 11:10:36.696291] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:56.067 [2024-10-15 
11:10:36.696310] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:56.325 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:56.325 #76 NEW cov: 11179 ft: 15901 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:56.583 #77 NEW cov: 11179 ft: 16201 corp: 5/129b lim: 32 exec/s: 77 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:10:56.841 #78 NEW cov: 11179 ft: 16404 corp: 6/161b lim: 32 exec/s: 78 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:10:56.841 #89 NEW cov: 11179 ft: 17042 corp: 7/193b lim: 32 exec/s: 89 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:10:57.098 #95 NEW cov: 11179 ft: 17974 corp: 8/225b lim: 32 exec/s: 95 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:57.356 #96 NEW cov: 11186 ft: 18250 corp: 9/257b lim: 32 exec/s: 96 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:57.614 #97 NEW cov: 11186 ft: 18331 corp: 10/289b lim: 32 exec/s: 48 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\000"- 00:10:57.614 #97 DONE cov: 11186 ft: 18331 corp: 10/289b lim: 32 exec/s: 48 rss: 75Mb 00:10:57.614 ###### Recommended dictionary. ###### 00:10:57.614 "\001\000\000\000" # Uses: 0 00:10:57.614 ###### End of recommended dictionary. ###### 00:10:57.614 Done 97 runs in 2 second(s) 00:10:57.614 [2024-10-15 11:10:38.079242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:10:57.873 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 
-- # echo leak:spdk_nvmf_qpair_disconnect 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:57.873 11:10:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:10:57.873 [2024-10-15 11:10:38.346906] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 00:10:57.873 [2024-10-15 11:10:38.346975] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724028 ] 00:10:57.873 [2024-10-15 11:10:38.417748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.873 [2024-10-15 11:10:38.461897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.132 INFO: Running with entropic power schedule (0xFF, 100). 00:10:58.132 INFO: Seed: 3813848864 00:10:58.132 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:10:58.132 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:10:58.132 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:58.132 INFO: A corpus is not provided, starting from an empty corpus 00:10:58.132 #2 INITED exec/s: 0 rss: 68Mb 00:10:58.132 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:58.132 This may also happen if the target rejected all inputs we tried so far 00:10:58.132 [2024-10-15 11:10:38.700805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:10:58.648 NEW_FUNC[1/672]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:10:58.648 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:58.648 #432 NEW cov: 11140 ft: 11048 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 CopyPart-ShuffleBytes-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes- 00:10:58.906 #433 NEW cov: 11154 ft: 13972 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:10:59.164 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:59.164 #434 NEW cov: 11171 ft: 14911 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:10:59.164 #435 NEW cov: 11174 ft: 15541 corp: 5/129b lim: 32 exec/s: 435 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:10:59.422 #436 NEW cov: 11174 ft: 16324 corp: 6/161b lim: 32 exec/s: 436 rss: 76Mb L: 32/32 MS: 1 CMP- DE: "\377\004"- 00:10:59.681 #442 NEW cov: 11174 ft: 16418 corp: 7/193b lim: 32 exec/s: 442 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:10:59.681 #443 NEW cov: 11174 ft: 17446 corp: 8/225b lim: 32 exec/s: 443 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:10:59.939 #449 NEW cov: 11181 ft: 17948 corp: 9/257b lim: 32 exec/s: 449 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:11:00.197 #450 NEW cov: 11181 ft: 18051 corp: 10/289b lim: 32 exec/s: 225 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:11:00.197 #450 DONE cov: 11181 ft: 18051 corp: 10/289b lim: 32 exec/s: 225 rss: 77Mb 00:11:00.197 ###### Recommended dictionary. ###### 00:11:00.197 "\377\004" # Uses: 1 00:11:00.197 ###### End of recommended dictionary. 
###### 00:11:00.197 Done 450 runs in 2 second(s) 00:11:00.197 [2024-10-15 11:10:40.708239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:11:00.456 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:11:00.456 11:10:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:11:00.456 [2024-10-15 11:10:40.977973] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:11:00.456 [2024-10-15 11:10:40.978078] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724388 ] 00:11:00.456 [2024-10-15 11:10:41.049690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.715 [2024-10-15 11:10:41.096116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.715 INFO: Running with entropic power schedule (0xFF, 100). 00:11:00.715 INFO: Seed: 2150889936 00:11:00.715 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:11:00.715 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:11:00.715 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:11:00.715 INFO: A corpus is not provided, starting from an empty corpus 00:11:00.715 #2 INITED exec/s: 0 rss: 67Mb 00:11:00.715 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:00.715 This may also happen if the target rejected all inputs we tried so far 00:11:00.715 [2024-10-15 11:10:41.332037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:11:00.973 [2024-10-15 11:10:41.388063] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:00.973 [2024-10-15 11:10:41.388097] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:01.232 NEW_FUNC[1/673]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:11:01.232 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:11:01.232 #59 NEW cov: 11152 ft: 10793 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 2 CMP-CopyPart- DE: "\001\000\000\000\000\000\0005"- 00:11:01.490 [2024-10-15 11:10:41.871306] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:01.490 [2024-10-15 11:10:41.871351] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:01.490 #80 NEW cov: 11166 ft: 13889 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:11:01.490 [2024-10-15 11:10:42.052848] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:01.490 [2024-10-15 11:10:42.052879] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:01.747 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:01.747 #81 NEW cov: 11183 ft: 15185 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:11:01.747 [2024-10-15 11:10:42.235517] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:01.747 [2024-10-15 11:10:42.235547] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:01.747 #82 NEW cov: 11183 ft: 15782 corp: 5/53b lim: 13 exec/s: 82 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:11:02.005 [2024-10-15 11:10:42.416180] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:02.005 [2024-10-15 11:10:42.416211] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
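For orientation, every fuzzer run in this log is launched by the same vfio/run.sh sequence that the shell trace shows: create the per-fuzzer working and vfio-user domain directories, rewrite the template JSON config to point at them, register two known-benign LeakSanitizer suppressions, and invoke llvm_vfio_fuzz with a short time budget against that fuzzer's corpus. Below is a minimal bash sketch of that sequence; the WORKSPACE and N variables are placeholders, the output redirections are inferred from the file paths used later in the trace (xtrace does not display redirections), and the run.sh locals are omitted, so this is a simplification rather than the literal script.

#!/usr/bin/env bash
# Minimal sketch of the per-fuzzer launch sequence traced in this log.
# WORKSPACE and N are placeholders; redirections are inferred, not traced.
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
N=5                                   # fuzzer_type: 3, 4, 5, 6 ... in this log
fuzzer_dir=/tmp/vfio-user-$N
corpus_dir=$WORKSPACE/spdk/../corpus/llvm_vfio_$N
suppress_file=/var/tmp/suppress_vfio_fuzz

mkdir -p "$fuzzer_dir" "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2" "$corpus_dir"

# Point the template config at this fuzzer's vfio-user domains.
sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%;
        s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
    "$WORKSPACE/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" \
    > "$fuzzer_dir/fuzz_vfio_json.conf"

# Suppress two known-benign leaks instead of failing the run on them.
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"

LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
"$WORKSPACE/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
    -m 0x1 -s 0 \
    -P "$WORKSPACE/spdk/../output/llvm/" \
    -F "$fuzzer_dir/domain/1" \
    -c "$fuzzer_dir/fuzz_vfio_json.conf" \
    -t 1 \
    -D "$corpus_dir" \
    -Y "$fuzzer_dir/domain/2" \
    -r "$fuzzer_dir/spdk$N.sock" \
    -Z $N

Each run then prints the libFuzzer banner (seed, loaded counters, corpus size) and the NEW/DONE coverage lines seen throughout this log, after which run.sh removes the per-fuzzer directory and the suppression file.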
00:11:02.005 #93 NEW cov: 11183 ft: 16185 corp: 6/66b lim: 13 exec/s: 93 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:11:02.005 [2024-10-15 11:10:42.597702] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:02.005 [2024-10-15 11:10:42.597734] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:02.264 #94 NEW cov: 11183 ft: 16305 corp: 7/79b lim: 13 exec/s: 94 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:11:02.264 [2024-10-15 11:10:42.778874] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:02.264 [2024-10-15 11:10:42.778904] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:02.264 #96 NEW cov: 11183 ft: 16554 corp: 8/92b lim: 13 exec/s: 96 rss: 76Mb L: 13/13 MS: 2 EraseBytes-InsertRepeatedBytes- 00:11:02.522 [2024-10-15 11:10:42.960404] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:02.522 [2024-10-15 11:10:42.960433] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:02.522 #97 NEW cov: 11183 ft: 16586 corp: 9/105b lim: 13 exec/s: 97 rss: 76Mb L: 13/13 MS: 1 ChangeASCIIInt- 00:11:02.522 [2024-10-15 11:10:43.141414] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:02.522 [2024-10-15 11:10:43.141444] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:02.779 #108 NEW cov: 11190 ft: 16647 corp: 10/118b lim: 13 exec/s: 108 rss: 76Mb L: 13/13 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\0005"- 00:11:02.779 [2024-10-15 11:10:43.323292] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:02.779 [2024-10-15 11:10:43.323329] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:03.036 #114 NEW cov: 11190 ft: 16993 corp: 11/131b lim: 13 exec/s: 57 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt- 00:11:03.036 #114 DONE cov: 11190 ft: 16993 corp: 11/131b lim: 13 exec/s: 57 rss: 76Mb 00:11:03.036 ###### Recommended dictionary. ###### 00:11:03.036 "\001\000\000\000\000\000\0005" # Uses: 6 00:11:03.036 ###### End of recommended dictionary. 
###### 00:11:03.036 Done 114 runs in 2 second(s) 00:11:03.036 [2024-10-15 11:10:43.452232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:11:03.295 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:11:03.295 11:10:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:11:03.295 [2024-10-15 11:10:43.721076] Starting SPDK v25.01-pre git sha1 35c8daa94 / DPDK 24.03.0 initialization... 
00:11:03.295 [2024-10-15 11:10:43.721170] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724743 ] 00:11:03.295 [2024-10-15 11:10:43.792982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.295 [2024-10-15 11:10:43.836831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.552 INFO: Running with entropic power schedule (0xFF, 100). 00:11:03.552 INFO: Seed: 597930029 00:11:03.552 INFO: Loaded 1 modules (382472 inline 8-bit counters): 382472 [0x2bc1f4c, 0x2c1f554), 00:11:03.552 INFO: Loaded 1 PC tables (382472 PCs): 382472 [0x2c1f558,0x31f55d8), 00:11:03.552 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:11:03.552 INFO: A corpus is not provided, starting from an empty corpus 00:11:03.552 #2 INITED exec/s: 0 rss: 67Mb 00:11:03.552 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:03.552 This may also happen if the target rejected all inputs we tried so far 00:11:03.552 [2024-10-15 11:10:44.080321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:11:03.552 [2024-10-15 11:10:44.132083] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:03.552 [2024-10-15 11:10:44.132115] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:04.068 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:11:04.068 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:11:04.068 #20 NEW cov: 11140 ft: 11072 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 3 InsertRepeatedBytes-ChangeByte-InsertRepeatedBytes- 00:11:04.068 [2024-10-15 11:10:44.642645] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:04.068 [2024-10-15 11:10:44.642695] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:04.325 #21 NEW cov: 11154 ft: 14310 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:11:04.325 [2024-10-15 11:10:44.845574] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:04.326 [2024-10-15 11:10:44.845607] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:04.583 NEW_FUNC[1/1]: 0x1bd39d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:04.583 #22 NEW cov: 11171 ft: 15859 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:11:04.583 [2024-10-15 11:10:45.056974] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:04.583 [2024-10-15 11:10:45.057005] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:04.583 #23 NEW cov: 11171 ft: 16343 corp: 5/37b lim: 9 exec/s: 23 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:11:04.841 [2024-10-15 11:10:45.257529] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:04.841 [2024-10-15 11:10:45.257560] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:04.841 
#24 NEW cov: 11171 ft: 16780 corp: 6/46b lim: 9 exec/s: 24 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:11:04.841 [2024-10-15 11:10:45.458123] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:04.841 [2024-10-15 11:10:45.458153] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:05.098 #30 NEW cov: 11171 ft: 17012 corp: 7/55b lim: 9 exec/s: 30 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:11:05.098 [2024-10-15 11:10:45.659978] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:05.098 [2024-10-15 11:10:45.660009] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:05.356 #33 NEW cov: 11171 ft: 17163 corp: 8/64b lim: 9 exec/s: 33 rss: 76Mb L: 9/9 MS: 3 CrossOver-CrossOver-InsertByte- 00:11:05.356 [2024-10-15 11:10:45.861999] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:05.356 [2024-10-15 11:10:45.862031] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:05.356 #34 NEW cov: 11178 ft: 17392 corp: 9/73b lim: 9 exec/s: 34 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:11:05.614 [2024-10-15 11:10:46.065866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:11:05.614 [2024-10-15 11:10:46.065896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:11:05.614 #35 NEW cov: 11178 ft: 17805 corp: 10/82b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:11:05.614 #35 DONE cov: 11178 ft: 17805 corp: 10/82b lim: 9 exec/s: 17 rss: 76Mb 00:11:05.614 Done 35 runs in 2 second(s) 00:11:05.614 [2024-10-15 11:10:46.210232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:11:05.872 11:10:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:11:05.872 11:10:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:05.872 11:10:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:05.872 11:10:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:11:05.872 00:11:05.872 real 0m19.547s 00:11:05.872 user 0m27.954s 00:11:05.872 sys 0m1.800s 00:11:05.872 11:10:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.872 11:10:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:05.872 ************************************ 00:11:05.872 END TEST vfio_llvm_fuzz 00:11:05.872 ************************************ 00:11:05.872 00:11:05.872 real 1m23.626s 00:11:05.872 user 2m7.899s 00:11:05.872 sys 0m9.598s 00:11:05.872 11:10:46 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.872 11:10:46 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:05.872 ************************************ 00:11:05.872 END TEST llvm_fuzz 00:11:05.872 ************************************ 00:11:06.130 11:10:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:11:06.130 11:10:46 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:11:06.130 11:10:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:11:06.130 11:10:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.130 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:11:06.130 11:10:46 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:11:06.130 11:10:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:11:06.130 11:10:46 
-- common/autotest_common.sh@1393 -- # xtrace_disable 00:11:06.130 11:10:46 -- common/autotest_common.sh@10 -- # set +x 00:11:10.314 INFO: APP EXITING 00:11:10.314 INFO: killing all VMs 00:11:10.314 INFO: killing vhost app 00:11:10.314 INFO: EXIT DONE 00:11:12.850 Waiting for block devices as requested 00:11:12.850 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:11:12.850 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:11:12.850 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:11:12.850 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:11:13.110 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:11:13.110 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:11:13.110 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:11:13.370 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:11:13.370 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:11:13.370 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:11:13.629 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:11:13.629 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:11:13.629 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:11:13.889 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:11:13.889 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:11:13.889 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:11:14.148 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:11:17.440 Cleaning 00:11:17.440 Removing: /dev/shm/spdk_tgt_trace.pid3702836 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3700497 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3701640 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3702836 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3703317 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3704104 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3704126 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3704935 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3705047 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3705393 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3705632 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3705865 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3706119 00:11:17.440 Removing: /var/run/dpdk/spdk_pid3706357 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3706552 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3706738 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3706975 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3707447 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3710025 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3710270 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3710699 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3710776 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3711212 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3711354 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3711692 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3711769 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3711984 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3711992 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3712194 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3712221 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3712660 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3712852 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3713046 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3713286 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3713859 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3714228 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3714567 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3714882 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3715225 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3715547 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3715869 00:11:17.441 Removing: 
/var/run/dpdk/spdk_pid3716219 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3716572 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3716938 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3717289 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3717649 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3718005 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3718339 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3718611 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3718922 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3719273 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3719626 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3719988 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3720341 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3720700 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3721064 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3721333 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3721620 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3721982 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3722509 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3722897 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3723310 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3723664 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3724028 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3724388 00:11:17.441 Removing: /var/run/dpdk/spdk_pid3724743 00:11:17.441 Clean 00:11:17.441 11:10:57 -- common/autotest_common.sh@1451 -- # return 0 00:11:17.441 11:10:57 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:11:17.441 11:10:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.441 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:11:17.441 11:10:57 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:11:17.441 11:10:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.441 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:11:17.441 11:10:57 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:11:17.441 11:10:57 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:11:17.441 11:10:57 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:11:17.441 11:10:57 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:11:17.441 11:10:57 -- spdk/autotest.sh@394 -- # hostname 00:11:17.442 11:10:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-49 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:11:17.703 geninfo: WARNING: invalid characters removed from testname! 
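The coverage phase running here follows the standard lcov capture, merge, filter pattern, using SPDK's llvm-gcov.sh wrapper as the --gcov-tool so that counter data from the clang-built binaries can be read. Condensed into a sketch, with the long --rc option list abbreviated to $LCOV_OPTS and details such as --ignore-errors elided, the capture above and the merge and filter commands that follow amount to:

#!/usr/bin/env bash
# Condensed sketch of the lcov flow in this phase. LCOV_OPTS abbreviates
# the --rc coverage flags and --gcov-tool wrapper shown in the trace.
out=$WORKSPACE/spdk/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
           --gcov-tool $WORKSPACE/spdk/test/fuzz/llvm/llvm-gcov.sh -q"

# Capture the counters written during the test run.
lcov $LCOV_OPTS -c --no-external -d "$WORKSPACE/spdk" \
     -t spdk-wfp-49 -o "$out/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"

# Strip DPDK, system headers, and example/tool sources from the report.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" \
         -o "$out/cov_total.info"
done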
00:11:24.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:11:25.646 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:11:28.928 11:11:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:37.044 11:11:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:42.311 11:11:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:47.571 11:11:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:54.134 11:11:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:58.356 11:11:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:12:03.763 11:11:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:12:03.763 11:11:44 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:12:03.763 11:11:44 -- common/autotest_common.sh@1691 -- $ lcov --version 00:12:03.763 11:11:44 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:12:04.023 11:11:44 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:12:04.023 11:11:44 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:12:04.023 11:11:44 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:12:04.023 11:11:44 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:12:04.023 11:11:44 -- scripts/common.sh@336 -- $ IFS=.-: 00:12:04.023 11:11:44 -- scripts/common.sh@336 -- $ read -ra ver1 00:12:04.023 11:11:44 -- scripts/common.sh@337 -- $ IFS=.-: 00:12:04.023 11:11:44 -- scripts/common.sh@337 -- $ read -ra ver2 00:12:04.023 11:11:44 -- scripts/common.sh@338 -- $ local 'op=<' 00:12:04.023 11:11:44 -- scripts/common.sh@340 -- $ ver1_l=2 00:12:04.023 11:11:44 -- scripts/common.sh@341 -- $ ver2_l=1 00:12:04.023 11:11:44 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:12:04.023 11:11:44 -- scripts/common.sh@344 -- $ case "$op" in 00:12:04.023 11:11:44 -- scripts/common.sh@345 -- $ : 1 00:12:04.023 11:11:44 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:12:04.023 11:11:44 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.023 11:11:44 -- scripts/common.sh@365 -- $ decimal 1 00:12:04.023 11:11:44 -- scripts/common.sh@353 -- $ local d=1 00:12:04.023 11:11:44 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:12:04.023 11:11:44 -- scripts/common.sh@355 -- $ echo 1 00:12:04.023 11:11:44 -- scripts/common.sh@365 -- $ ver1[v]=1 00:12:04.023 11:11:44 -- scripts/common.sh@366 -- $ decimal 2 00:12:04.023 11:11:44 -- scripts/common.sh@353 -- $ local d=2 00:12:04.023 11:11:44 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:12:04.023 11:11:44 -- scripts/common.sh@355 -- $ echo 2 00:12:04.023 11:11:44 -- scripts/common.sh@366 -- $ ver2[v]=2 00:12:04.023 11:11:44 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:12:04.023 11:11:44 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:12:04.023 11:11:44 -- scripts/common.sh@368 -- $ return 0 00:12:04.023 11:11:44 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.023 11:11:44 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:12:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.023 --rc genhtml_branch_coverage=1 00:12:04.023 --rc genhtml_function_coverage=1 00:12:04.023 --rc genhtml_legend=1 00:12:04.023 --rc geninfo_all_blocks=1 00:12:04.023 --rc geninfo_unexecuted_blocks=1 00:12:04.023 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:04.023 ' 00:12:04.023 11:11:44 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:12:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.023 --rc genhtml_branch_coverage=1 00:12:04.023 --rc genhtml_function_coverage=1 00:12:04.023 --rc genhtml_legend=1 00:12:04.023 --rc geninfo_all_blocks=1 00:12:04.023 --rc geninfo_unexecuted_blocks=1 00:12:04.023 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:04.023 ' 00:12:04.023 11:11:44 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:12:04.023 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:12:04.023 --rc genhtml_branch_coverage=1 00:12:04.023 --rc genhtml_function_coverage=1 00:12:04.023 --rc genhtml_legend=1 00:12:04.023 --rc geninfo_all_blocks=1 00:12:04.023 --rc geninfo_unexecuted_blocks=1 00:12:04.023 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:04.023 ' 00:12:04.023 11:11:44 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:12:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.023 --rc genhtml_branch_coverage=1 00:12:04.023 --rc genhtml_function_coverage=1 00:12:04.023 --rc genhtml_legend=1 00:12:04.023 --rc geninfo_all_blocks=1 00:12:04.023 --rc geninfo_unexecuted_blocks=1 00:12:04.023 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:04.023 ' 00:12:04.023 11:11:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:12:04.023 11:11:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:12:04.023 11:11:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:12:04.023 11:11:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.023 11:11:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.023 11:11:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.023 11:11:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.023 11:11:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.023 11:11:44 -- paths/export.sh@5 -- $ export PATH 00:12:04.023 11:11:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.023 11:11:44 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:12:04.023 11:11:44 -- common/autobuild_common.sh@486 -- $ date +%s 00:12:04.023 11:11:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728983504.XXXXXX 00:12:04.023 11:11:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728983504.zWIc8U 00:12:04.023 11:11:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:12:04.023 11:11:44 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:12:04.023 11:11:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:12:04.023 11:11:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:12:04.023 11:11:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:12:04.023 11:11:44 -- common/autobuild_common.sh@502 -- $ get_config_params 00:12:04.023 11:11:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:12:04.023 11:11:44 -- common/autotest_common.sh@10 -- $ set +x 00:12:04.023 11:11:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:12:04.023 11:11:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:12:04.023 11:11:44 -- pm/common@17 -- $ local monitor 00:12:04.023 11:11:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.023 11:11:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.023 11:11:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.023 11:11:44 -- pm/common@21 -- $ date +%s 00:12:04.023 11:11:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.023 11:11:44 -- pm/common@21 -- $ date +%s 00:12:04.023 11:11:44 -- pm/common@21 -- $ date +%s 00:12:04.023 11:11:44 -- pm/common@25 -- $ sleep 1 00:12:04.023 11:11:44 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728983504 00:12:04.023 11:11:44 -- pm/common@21 -- $ date +%s 00:12:04.023 11:11:44 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728983504 00:12:04.023 11:11:44 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728983504 00:12:04.023 11:11:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728983504 00:12:04.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728983504_collect-cpu-temp.pm.log 00:12:04.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728983504_collect-cpu-load.pm.log 00:12:04.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728983504_collect-vmstat.pm.log 00:12:04.024 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728983504_collect-bmc-pm.bmc.pm.log 00:12:04.960 11:11:45 -- common/autobuild_common.sh@505 -- $ trap 
stop_monitor_resources EXIT 00:12:04.960 11:11:45 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:12:04.960 11:11:45 -- spdk/autopackage.sh@14 -- $ timing_finish 00:12:04.960 11:11:45 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:12:04.960 11:11:45 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:12:04.960 11:11:45 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:12:04.960 11:11:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:12:04.960 11:11:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:12:04.960 11:11:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:12:04.960 11:11:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.960 11:11:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:12:04.960 11:11:45 -- pm/common@44 -- $ pid=3730694 00:12:04.960 11:11:45 -- pm/common@50 -- $ kill -TERM 3730694 00:12:04.960 11:11:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.960 11:11:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:12:04.960 11:11:45 -- pm/common@44 -- $ pid=3730696 00:12:04.960 11:11:45 -- pm/common@50 -- $ kill -TERM 3730696 00:12:04.960 11:11:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.960 11:11:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:12:04.960 11:11:45 -- pm/common@44 -- $ pid=3730698 00:12:04.960 11:11:45 -- pm/common@50 -- $ kill -TERM 3730698 00:12:04.960 11:11:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:04.960 11:11:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:12:04.960 11:11:45 -- pm/common@44 -- $ pid=3730726 00:12:04.960 11:11:45 -- pm/common@50 -- $ sudo -E kill -TERM 3730726 00:12:04.960 + [[ -n 3604412 ]] 00:12:04.960 + sudo kill 3604412 00:12:05.229 [Pipeline] } 00:12:05.244 [Pipeline] // stage 00:12:05.250 [Pipeline] } 00:12:05.264 [Pipeline] // timeout 00:12:05.269 [Pipeline] } 00:12:05.283 [Pipeline] // catchError 00:12:05.288 [Pipeline] } 00:12:05.302 [Pipeline] // wrap 00:12:05.309 [Pipeline] } 00:12:05.321 [Pipeline] // catchError 00:12:05.330 [Pipeline] stage 00:12:05.333 [Pipeline] { (Epilogue) 00:12:05.345 [Pipeline] catchError 00:12:05.347 [Pipeline] { 00:12:05.360 [Pipeline] echo 00:12:05.362 Cleanup processes 00:12:05.367 [Pipeline] sh 00:12:05.651 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:05.651 3730847 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:12:05.651 3731092 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:05.663 [Pipeline] sh 00:12:05.944 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:05.944 ++ grep -v 'sudo pgrep' 00:12:05.944 ++ awk '{print $1}' 00:12:05.944 + sudo kill -9 3730847 00:12:05.956 [Pipeline] sh 00:12:06.239 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:12:18.466 [Pipeline] sh 00:12:18.752 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:12:18.752 Artifacts sizes are good 00:12:18.767 
[Pipeline] archiveArtifacts 00:12:18.774 Archiving artifacts 00:12:18.910 [Pipeline] sh 00:12:19.197 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:12:19.212 [Pipeline] cleanWs 00:12:19.223 [WS-CLEANUP] Deleting project workspace... 00:12:19.223 [WS-CLEANUP] Deferred wipeout is used... 00:12:19.230 [WS-CLEANUP] done 00:12:19.232 [Pipeline] } 00:12:19.249 [Pipeline] // catchError 00:12:19.260 [Pipeline] sh 00:12:19.540 + logger -p user.info -t JENKINS-CI 00:12:19.547 [Pipeline] } 00:12:19.562 [Pipeline] // stage 00:12:19.567 [Pipeline] } 00:12:19.582 [Pipeline] // node 00:12:19.595 [Pipeline] End of Pipeline 00:12:19.666 Finished: SUCCESS
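One small idiom worth pulling out of the epilogue above: the cleanup stage lists anything still referencing the workspace, filters out the pgrep invocation itself, and SIGKILLs the rest (in this run, the ipmitool sdr dump left behind by the BMC power monitor). A minimal sketch follows, with the workspace path made an argument rather than the hard-coded path used in the pipeline:

#!/usr/bin/env bash
# Minimal sketch of the epilogue's workspace process cleanup.
# The workspace path is a parameter here; the pipeline hard-codes it.
ws=${1:?usage: $0 <workspace>}

# List processes still referencing the workspace, excluding pgrep itself.
pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# SIGKILL whatever remains so the workspace can be wiped cleanly.
if [ -n "$pids" ]; then
    sudo kill -9 $pids
fi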